CN109547850B - Video shooting error correction method and related product - Google Patents


Info

Publication number
CN109547850B
Authority
CN
China
Prior art keywords
video
time period
file
same
video file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811400187.9A
Other languages
Chinese (zh)
Other versions
CN109547850A (en)
Inventor
Wang Jing (王晶)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Qiucha Network Technology Co.,Ltd.
Original Assignee
Hangzhou Qiucha Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Qiucha Network Technology Co ltd filed Critical Hangzhou Qiucha Network Technology Co ltd
Priority to CN201811400187.9A priority Critical patent/CN109547850B/en
Publication of CN109547850A publication Critical patent/CN109547850A/en
Application granted granted Critical
Publication of CN109547850B publication Critical patent/CN109547850B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440245Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip

Abstract

The present disclosure provides a video shooting error correction method and related products. The method comprises the following steps: a mobile terminal receives a short video shooting request from a user and performs short video shooting to obtain a first video file; the mobile terminal extracts the background music of the first video file and determines a first time period and a second time period corresponding to the same paragraph in the background music; the mobile terminal compares a second video in the first time period with a third video in the second time period of the first video file to determine the parts in which the second video and the third video differ, and performs error correction processing on the different parts of the first video file to obtain an error-corrected fourth video file. The technical solution provided by the application has the advantage of enabling error correction.

Description

Video shooting error correction method and related product
Technical Field
The invention relates to the technical field of cultural media, and in particular to a video shooting error correction method and a related product.
Background
A short video is an Internet content distribution format, generally referring to video content of no more than one minute that is distributed on new Internet media.
The shooting duration of a short video does not necessarily match its playback duration. For example, a short video may play for 30 seconds while its shooting takes 2 minutes; the 2 minutes of footage are compressed to 30 seconds through fast-forward playback and similar techniques to improve the viewing effect. Existing video shooting requires the video to be captured successfully in a single take, which is difficult for users, who may need several attempts to obtain a usable take. Errors in an already shot video therefore cannot be corrected, which degrades the user experience.
Disclosure of Invention
The embodiment of the invention provides a video shooting error correction method and a related product, which can correct the places in a video where shooting errors occur and thereby have the advantage of improving the user experience.
In a first aspect, an embodiment of the present invention provides a video shooting error correction method, where the method includes the following steps:
the mobile terminal receives a short video shooting request of a user and executes short video shooting to obtain a first video file;
the mobile terminal extracts background music of the first video file and determines a first time period and a second time period corresponding to the same paragraph in the background music;
the mobile terminal compares a second video in a first time period and a third video in a second time period in the first video file to determine different parts of the second video and the third video, and performs error correction processing on the different parts of the first video file to obtain an error-corrected fourth video file.
Optionally, the same paragraph specifically includes:
the same parts of the lyrics of the music file.
Optionally, the comparing the second video with the third video to determine different parts of the second video and the third video specifically includes:
dividing the second video into n parts according to each word of the lyrics of the same paragraph, dividing the third video into n parts in the same way, and comparing the video frames of the two parts corresponding to the same word to determine whether they are consistent; if they are consistent, the parts are determined to be the same part, and if not, they are determined to be different parts.
Optionally, the obtaining of the fourth video file after error correction by performing error correction processing on the different parts of the first video file specifically includes:
deleting videos of the time periods corresponding to the different parts of the first video file to obtain an error-corrected fourth video file;
or selecting the second video or the third video as the video for both the first time period and the second time period of the first video file.
In a second aspect, a terminal is provided, which includes: a camera, a processor, a memory and a display screen,
the camera is used for executing short video shooting to obtain a first video file when the display screen acquires a short video shooting request of a user;
the processor is used for extracting background music of the first video file and determining a first time period and a second time period corresponding to the same paragraph in the background music;
the processor is further configured to compare a second video in the first time period with a third video in the second time period of the first video file to determine the different portions of the second video and the third video, and to perform error correction processing on the different portions of the first video file to obtain an error-corrected fourth video file.
Optionally, the same paragraph specifically includes: the same parts of the lyrics of the music file.
Optionally, the processor is specifically configured to divide the second video into n portions according to each word of the lyrics of the same paragraph, divide the third video into n portions in the same way, and compare the video frames of the two portions corresponding to the same word to determine whether they are consistent; if they are consistent, the portions are determined to be the same portion, and if not, they are determined to be different portions.
Optionally, the processor is specifically configured to delete the videos of the time periods corresponding to the different portions of the first video file to obtain an error-corrected fourth video file; or to select the second video or the third video as the video for both the first time period and the second time period of the first video file.
Optionally, the terminal is a tablet computer or a smartphone.
In a third aspect, a computer-readable storage medium is provided, which stores a program for electronic data exchange, wherein the program causes a terminal to execute the method provided in the first aspect.
The embodiment of the invention has the following beneficial effects:
It can be seen that, after the shot first video file is obtained, the second video and the third video corresponding to the same paragraph in the first video file are compared to determine the parts in which they differ, and those different parts are then corrected, so that error correction of the video is achieved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a mobile terminal.
Fig. 2 is a flow chart diagram of a video shot error correction method.
Fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by those skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 provides a mobile terminal, which may specifically be a smartphone running an iOS or Android system. The mobile terminal may specifically include a processor, a memory, a camera and a display screen; these components may be connected through a bus or in other ways, and the application does not limit the specific manner of connection.
Referring to fig. 2, fig. 2 provides a video shooting error correction method, which is performed by the mobile terminal shown in fig. 1 and includes the following steps:
step S201, a mobile terminal receives a short video shooting request of a user and executes short video shooting to obtain a first video file;
step S202, the mobile terminal extracts background music of the first video file and determines a first time period and a second time period corresponding to the same paragraph in the background music;
the background music may be regular music, and the same section, i.e. the section with the same lyrics inside the music, such as "learning cat shout" we learn cat shout together, and meow "belongs to the same section, and has the same section for most of music.
Step S203, the mobile terminal compares a second video in the first time period with a third video in the second time period of the first video file to determine the different parts of the second video and the third video, and performs error correction processing on the different parts of the first video file to obtain an error-corrected fourth video file.
According to the above technical solution, after the shot first video file is obtained, the second video and the third video corresponding to the same paragraph in the first video file are compared to determine their different parts, and those different parts are then corrected, so that error correction of the video is achieved.
The principle of the application is that a shot video can be corrected directly provided that another video segment with the same actions is available. Analysis of big data shows that the actions performed during occurrences of the same paragraph are the same; therefore, the videos corresponding to two occurrences of the same paragraph are compared to determine their different parts, and error correction processing of those different parts yields the error-corrected fourth video file. This improves the user's experience of video processing, since re-shooting is not required.
The determining of the different portions of the second video and the third video by comparing the second video and the third video may specifically include:
dividing the second video into n parts according to each word of the lyrics of the same paragraph (n being the number of words or letters in the lyrics of that paragraph), dividing the third video into n parts in the same way, and comparing the video frames of the two parts corresponding to the same word to determine whether they are consistent; if they are consistent, the parts are determined to be the same part, and if not, they are determined to be different parts.
The above comparison of the video frames of the two parts corresponding to the same word can be carried out with an existing video comparison method.
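For example, a minimal sketch of this word-by-word comparison could look as follows, assuming OpenCV is available and that each word comes with a start timestamp in both time periods; the mean-absolute-difference measure and the threshold are stand-ins for whatever existing video comparison method is chosen.

```python
import cv2
import numpy as np

def read_frame(video_path, t_sec):
    """Grab the frame closest to t_sec (in seconds) from the video file."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, t_sec * 1000)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

def find_different_words(video_path, words_period1, words_period2, thresh=18.0):
    """words_period1/2: equal-length lists of (word, start_sec).

    Returns the indices of words whose frames differ between the two periods."""
    different = []
    for i, ((_, t1), (_, t2)) in enumerate(zip(words_period1, words_period2)):
        f1, f2 = read_frame(video_path, t1), read_frame(video_path, t2)
        if f1 is None or f2 is None:
            continue
        f2 = cv2.resize(f2, (f1.shape[1], f1.shape[0]))
        diff = np.abs(f1.astype(np.float32) - f2.astype(np.float32)).mean()
        if diff > thresh:                 # inconsistent frames -> different part
            different.append(i)
    return different
```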
Optionally, the performing error correction processing on the different parts of the first video file to obtain an error-corrected fourth video file may specifically include:
deleting videos of the time periods corresponding to the different parts of the first video file to obtain an error-corrected fourth video file;
or selecting the second video or the third video as the video for both the first time period and the second time period of the first video file.
The first way deletes the different parts directly, so there is no need to determine which of the different parts is the erroneous one; its disadvantage is that the video effect is affected. The second way uses the same video for both time periods; because the actions in the same paragraph are the same, this selection improves the video effect.
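The two correction strategies could be sketched as follows; this is only an illustrative approximation that assumes the moviepy 1.x API, and the file paths, time periods and the choice of the "better" segment are supplied by the caller.

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

def correct_by_deletion(src, dst, bad_periods):
    """Strategy 1: cut out the time periods that were found to differ."""
    clip = VideoFileClip(src)
    keep, cursor = [], 0.0
    for start, end in sorted(bad_periods):
        if start > cursor:
            keep.append(clip.subclip(cursor, start))
        cursor = end
    if cursor < clip.duration:
        keep.append(clip.subclip(cursor, clip.duration))
    concatenate_videoclips(keep).write_videofile(dst)

def correct_by_replacement(src, dst, first_period, second_period, use_first=True):
    """Strategy 2: reuse the better segment for both occurrences of the paragraph."""
    clip = VideoFileClip(src)
    good = clip.subclip(*(first_period if use_first else second_period))
    a, b = sorted([first_period, second_period])
    parts = [clip.subclip(0, a[0]), good,
             clip.subclip(a[1], b[0]), good,
             clip.subclip(b[1], clip.duration)]
    concatenate_videoclips(parts).write_videofile(dst)
```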
The manner of selecting the second video or the third video may specifically be: determining the fluency of the second video and the third video, and selecting the video with the higher fluency to replace the one with the lower fluency.
The fluency may be determined as follows: adjacent frames of the second video are compared to obtain a first number of consistent image frames, and adjacent frames of the third video are compared to obtain a second number of consistent image frames; if the first number is smaller than the second number, the fluency of the second video is determined to be greater than that of the third video.
The principle is that, in video shooting, the more continuous the motion is, the smaller the probability that adjacent frames are identical; conversely, if an error occurs, the person pauses at the erroneous action, so identical adjacent frames appear, and the more such cases there are, the poorer the fluency.
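A sketch of this fluency measure, again assuming OpenCV and a simple mean-absolute-difference test for "consistent" adjacent frames (the threshold is an assumption), could be:

```python
import cv2
import numpy as np

def count_static_adjacent_frames(video_path, start_sec, end_sec, eps=2.0):
    """Count adjacent-frame pairs that are (almost) identical within a segment."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, start_sec * 1000)
    count, prev = 0, None
    while cap.get(cv2.CAP_PROP_POS_MSEC) < end_sec * 1000:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None and np.abs(gray - prev).mean() < eps:
            count += 1          # a pause: adjacent frames are nearly identical
        prev = gray
    cap.release()
    return count

def pick_more_fluent(video_path, first_period, second_period):
    """Return which segment has fewer static adjacent frames, i.e. more fluency."""
    c1 = count_static_adjacent_frames(video_path, *first_period)
    c2 = count_static_adjacent_frames(video_path, *second_period)
    return "second video" if c1 < c2 else "third video"
```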
Optionally, the method may further include:
and carrying out flattening processing on the video frames with double chin in the fourth video file.
The above-mentioned flattening processing of the video frame with double chin may specifically include:
extracting the p-th video frame of the fourth video file and obtaining the texture lines of the p-th video frame; determining a region in which 2 adjacent texture lines are less than a set distance apart as a region to be determined; constructing λ equidistant lines between the texture lines of the region to be determined, extracting the pictures between the equidistant lines, and comparing them to determine whether they are consistent; if they are consistent (for a non-double-chin region the pictures are consistent; the picture comparison can be implemented with an existing comparison method), determining that the region to be determined is a non-double-chin region; if they are inconsistent (in a double-chin region the flesh is unevenly distributed, so the pictures of the sub-regions differ), determining that the region to be determined is a double-chin region, constructing an upper equidistant line in the region above the double-chin region, extracting the picture γ between the upper equidistant line and the upper texture line, and replacing the λ-1 pictures between the equidistant lines, extended downwards, with the picture γ (that is, the picture of each equidistant-line region is replaced with the picture γ), thereby completing the flattening processing.
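As a rough illustration of the equidistant-line consistency test described above (not the patented implementation), the band between two adjacent texture lines could be cut into λ strips and the strips compared; the per-column brightness profile and the threshold used below are simplifying assumptions in place of a proper image comparison.

```python
import numpy as np

def is_double_chin_region(band, lam=4, diff_threshold=12.0):
    """band: H x W x 3 image region between two adjacent texture lines."""
    if band.shape[0] < lam:
        return False                                      # too small to test
    strips = np.array_split(band.astype(np.float32), lam, axis=0)
    profiles = [s.mean(axis=(0, 2)) for s in strips]      # per-column brightness
    diffs = [np.abs(profiles[i] - profiles[i + 1]).mean()
             for i in range(len(profiles) - 1)]
    # Consistent strips -> non-double-chin region; inconsistent -> double chin.
    return max(diffs) > diff_threshold
```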
The obtaining of the texture lines of the p-th video frame may specifically include:
marking the RGB values of all pixel points of the p-th video frame with different colors, determining line segments of a different color within a same-colored region, obtaining, if the number of such differently colored line segments within the same-colored region exceeds 2, the distance between the line segments, and determining the line segments to be texture lines if the distance is smaller than a set distance threshold.
A texture line divides the skin into several areas. Because these areas are all skin, their RGB values are the same, so when they are marked with colors the skin receives a single color; the texture line, however, differs from the skin in RGB value because of its fold and the lighting, so after marking its color is inconsistent with the color of the skin area, and it forms a line segment. In addition, a double chin has at least 2 texture lines, so their number is also 2 or more. The texture lines can therefore be distinguished by this method.
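A rough sketch of texture-line detection is given below; it approximates the colour-marking step described above with adaptive thresholding (OpenCV 4 is assumed), keeping only thin, roughly horizontal segments and pairing those that lie closer together than a set distance, which is how candidate chin creases would be flagged.

```python
import cv2

def find_texture_lines(frame_bgr, max_gap_px=40):
    """Return pairs of nearby line-like segments that may be chin texture lines."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Dark, thin creases stand out against evenly lit skin.
    edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY_INV, 15, 8)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]            # (x, y, w, h)
    lines = sorted((b for b in boxes if b[2] > 3 * b[3]),       # elongated only
                   key=lambda b: b[1])
    # At least two lines closer together than the set distance -> candidates.
    pairs = []
    for upper, lower in zip(lines, lines[1:]):
        if 0 < (lower[1] - upper[1]) < max_gap_px:
            pairs.append((upper, lower))
    return pairs
```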
Referring to fig. 3, fig. 3 provides a terminal including: a camera, a processor and a display screen,
the camera is used for executing short video shooting to obtain a first video file when the display screen acquires a short video shooting request of a user;
the processor is used for extracting background music of the first video file and determining a first time period and a second time period corresponding to the same paragraph in the background music;
the processor is further configured to compare a second video in a first time period and a third video in a second time period in the first video file with the third video to determine different portions of the second video and the third video, and perform error correction processing on the different portions of the first video file to obtain an error-corrected fourth video file.
Embodiments of the present invention also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute part or all of the steps of any one of the video shooting error correction methods as recited in the above method embodiments.
Embodiments of the present invention also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the video capture error correction methods as recited in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules illustrated are not necessarily required to practice the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above embodiments of the present invention are described in detail, and the principle and the implementation of the present invention are explained by applying specific embodiments, and the above description of the embodiments is only used to help understanding the method of the present invention and the core idea thereof; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A method for video capture error correction, the method comprising the steps of:
the mobile terminal receives a short video shooting request of a user and executes short video shooting to obtain a first video file;
the mobile terminal extracts background music of the first video file and determines a first time period and a second time period corresponding to the same paragraph in the background music;
the mobile terminal compares a second video in a first time period and a third video in a second time period in the first video file to determine different parts of the second video and the third video, and performs error correction processing on the different parts of the first video file to obtain an error-corrected fourth video file.
2. The method according to claim 1, wherein the same paragraph specifically comprises:
the same parts of the lyrics of the music file.
3. The method of claim 1, wherein comparing the second video to the third video to determine different portions of the second video and the third video comprises:
dividing the second video into n parts according to each word of the lyrics of the same paragraph, dividing the third video into n parts in the same way, and comparing the video frames of the two parts corresponding to the same word to determine whether they are consistent; if they are consistent, the parts are determined to be the same part, and if not, they are determined to be different parts.
4. The method according to claim 3, wherein said performing error correction processing on the different part of the first video file to obtain an error-corrected fourth video file specifically comprises:
deleting videos of the time periods corresponding to the different parts of the first video file to obtain an error-corrected fourth video file;
or selecting the second video or the third video as the video for both the first time period and the second time period of the first video file.
5. A terminal, the terminal comprising: a camera, a processor and a display screen, characterized in that,
the camera is used for executing short video shooting to obtain a first video file when the display screen acquires a short video shooting request of a user;
the processor is used for extracting background music of the first video file and determining a first time period and a second time period corresponding to the same paragraph in the background music;
the processor is further configured to compare a second video in the first time period with a third video in the second time period of the first video file to determine the different portions of the second video and the third video, and to perform error correction processing on the different portions of the first video file to obtain an error-corrected fourth video file.
6. The terminal according to claim 5, wherein the same paragraph specifically includes: the same parts of the lyrics of the music file.
7. The terminal of claim 5,
the processor is specifically configured to divide the second video into n portions according to each word of the lyrics of the same paragraph, divide the third video into n portions in the same way, and compare the video frames of the two portions corresponding to the same word to determine whether they are consistent; if they are consistent, the portions are determined to be the same portion, and if not, they are determined to be different portions.
8. The terminal of claim 5,
the processor is specifically configured to delete the videos of the time periods corresponding to the different portions of the first video file to obtain an error-corrected fourth video file; or to select the second video or the third video as the video for both the first time period and the second time period of the first video file.
9. The terminal according to any one of claims 5-8, wherein
the terminal is a tablet computer or a smartphone.
10. A computer-readable storage medium storing a program, wherein the program causes a terminal to perform the method provided in any one of claims 1 to 4.
CN201811400187.9A 2018-11-22 2018-11-22 Video shooting error correction method and related product Active CN109547850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811400187.9A CN109547850B (en) 2018-11-22 2018-11-22 Video shooting error correction method and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811400187.9A CN109547850B (en) 2018-11-22 2018-11-22 Video shooting error correction method and related product

Publications (2)

Publication Number Publication Date
CN109547850A CN109547850A (en) 2019-03-29
CN109547850B (en) 2021-04-06

Family

ID=65850014

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811400187.9A Active CN109547850B (en) 2018-11-22 2018-11-22 Video shooting error correction method and related product

Country Status (1)

Country Link
CN (1) CN109547850B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1666520A (en) * 2002-07-01 2005-09-07 Microsoft Corporation A system and method for providing user control over repeating objects embedded in a stream
CN101159834A (en) * 2007-10-25 2008-04-09 中国科学院计算技术研究所 Method and system for detecting repeatable video and audio program fragment
CN102024033A (en) * 2010-12-01 2011-04-20 北京邮电大学 Method for automatically detecting audio templates and chaptering videos
CN103905925A (en) * 2014-03-07 2014-07-02 深圳创维数字技术股份有限公司 Method and terminal for repeatedly playing program
CN105187692A (en) * 2014-06-16 2015-12-23 腾讯科技(北京)有限公司 Video recording method and device
CN106055659A (en) * 2016-06-01 2016-10-26 腾讯科技(深圳)有限公司 Matching method for lyrics data and equipment thereof

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9055239B2 (en) * 2003-10-08 2015-06-09 Verance Corporation Signal continuity assessment using embedded watermarks
US20180082607A1 (en) * 2016-09-19 2018-03-22 Michael Everding Interactive Video Captioning Program


Also Published As

Publication number Publication date
CN109547850A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
CN108805047B (en) Living body detection method and device, electronic equipment and computer readable medium
CN110121098B (en) Video playing method and device, storage medium and electronic device
US9390475B2 (en) Backlight detection method and device
CN111553362B (en) Video processing method, electronic device and computer readable storage medium
CN110889824A (en) Sample generation method and device, electronic equipment and computer readable storage medium
CN113126937B (en) Display terminal adjusting method and display terminal
US11538141B2 (en) Method and apparatus for processing video
CN107509115A (en) A kind of method and device for obtaining live middle Wonderful time picture of playing
US20220383637A1 (en) Live streaming sampling method and apparatus, and electronic device
CN112686165A (en) Method and device for identifying target object in video, electronic equipment and storage medium
CN111401238A (en) Method and device for detecting character close-up segments in video
CN109547850B (en) Video shooting error correction method and related product
CN106970942B (en) Method and terminal for actively defending yellow-related content
CN109685015B (en) Image processing method and device, electronic equipment and computer storage medium
CN108475430B (en) Picture quality evaluation method and device
CN109361956B (en) Time-based video cropping methods and related products
US20160127194A1 (en) Electronic device and method for setting network model
CN109640170B (en) Speed processing method of self-shooting video, terminal and storage medium
CN113095058B (en) Method and device for processing page turning of streaming document, electronic equipment and storage medium
CN115019138A (en) Video subtitle erasing, model training and interaction method, device and storage medium
CN109242763B (en) Picture processing method, picture processing device and terminal equipment
CN109712103B (en) Eye processing method for self-shot video Thor picture and related product
CN109639962B (en) Self-timer short video mode selection method and related product
CN109658327B (en) Self-photographing video hair style generation method and related product
CN110851883A (en) Equipment fingerprint generation method and device based on picture drawing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Wang Jing

Inventor before: Zhang Lei

CB03 Change of inventor or designer information
TA01 Transfer of patent application right

Effective date of registration: 20210225

Address after: Room 525, building 7, 601 Qiuyi Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province, 310051

Applicant after: Hangzhou Qiucha Network Technology Co.,Ltd.

Address before: 518003 4K, building B, jinshanghua, No.45, Jinlian Road, Huangbei street, Luohu District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN YIDA CULTURE MEDIA Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant