WO2022062985A1 - Method and apparatus for adding a special effect in a video, and terminal device - Google Patents

Method and apparatus for adding a special effect in a video, and terminal device

Info

Publication number
WO2022062985A1
Authority
WO
WIPO (PCT)
Prior art keywords
sliding
screen
area
special effect
finger
Prior art date
Application number
PCT/CN2021/118451
Other languages
English (en)
Chinese (zh)
Inventor
吴霞
张硕
Original Assignee
荣耀终端有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荣耀终端有限公司
Publication of WO2022062985A1


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/426Internal components of the client ; Characteristics thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/426Internal components of the client ; Characteristics thereof
    • H04N21/42653Internal components of the client ; Characteristics thereof for processing graphics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present application relates to the field of mobile communications, and in particular, to a method, apparatus and terminal device for adding special effects to video.
  • special effects icons are usually displayed on the incoming call interface, and the user can click on the special effects icons to expand the special effects menu or the beauty adjustment bar.
  • when users want to apply special effects, they need to click the special effect icon to expand the special effect editing menu, and then click to select a special effect thumbnail or slide within the menu to adjust the beauty level.
  • in this interaction process the user needs at least two clicks to activate the specified special effect; the process is cumbersome and time-consuming, and the clickable area of the icon is relatively small, which makes quick operation inconvenient. It therefore cannot satisfy the goal of quickly enabling special effects while a call is coming in.
  • embodiments of the present application provide a method, apparatus and terminal device for adding special effects to a video, so as to solve the technical problem in the prior art that it is inconvenient for a user to quickly add special effects during a video call.
  • an embodiment of the present application provides a method for adding special effects to a video.
  • the method includes the following steps: capturing a user's gesture action and obtaining sliding information of the gesture action; and applying special effect material on the screen according to the sliding information of the gesture action.
  • unlike the prior art, in which the user needs at least two clicks to activate special effects during a video call and the interaction process is cumbersome, the solution provided by this embodiment captures and processes the sliding information of the user's gesture action, so that a special effect is added wherever the user's finger slides.
  • the step of capturing the user's gesture action and acquiring the sliding information of the gesture action includes: capturing and recording in real time, in the two-dimensional xy coordinate system of the screen, the starting point coordinate information of the point where the user's finger begins to slide and the end point coordinate information of the point where the slide ends; judging the sliding direction and sliding speed of the finger according to the starting point coordinate information and the end point coordinate information; and judging the sliding gesture adopted by the user according to the sliding direction of the finger. The sliding information includes the starting point coordinate information of the sliding start point, the end point coordinate information of the sliding end point, the sliding gesture, the sliding direction and the sliding speed.
  • the starting point coordinate information of the sliding start point and the end point coordinate information of the sliding end point provide the basis for judging the sliding gesture, the sliding direction and the sliding speed.
  • the sliding gesture describes the way the user's finger slides on the screen, the sliding direction describes the path the finger follows on the screen, and the sliding speed describes how fast the finger moves. Combining this sliding information makes it possible to know precisely which area of the screen needs its special effect material replaced, as sketched below.
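  • As an illustrative, non-authoritative sketch of this data flow (the names SlideInfo and computeSlideInfo are hypothetical and not part of the disclosure), the sliding information could be derived from the recorded start/end coordinates and timestamps as follows; on an Android-style terminal these raw values would typically come from touch events:

```kotlin
import kotlin.math.atan2
import kotlin.math.hypot

// Hypothetical container for the sliding information described above.
data class SlideInfo(
    val startX: Float, val startY: Float,   // coordinates of the sliding start point
    val endX: Float, val endY: Float,       // coordinates of the sliding end point
    val directionRad: Float,                // sliding direction as an angle in the screen's xy plane
    val speedPxPerMs: Float                 // sliding speed = displacement / elapsed time
)

// Derive direction and speed from the recorded start/end points and their timestamps.
fun computeSlideInfo(
    startX: Float, startY: Float, startTimeMs: Long,
    endX: Float, endY: Float, endTimeMs: Long
): SlideInfo {
    val dx = endX - startX
    val dy = endY - startY
    val distance = hypot(dx, dy)                        // coordinate displacement D
    val elapsedMs = (endTimeMs - startTimeMs).coerceAtLeast(1L)
    return SlideInfo(
        startX, startY, endX, endY,
        directionRad = atan2(dy, dx),
        speedPxPerMs = distance / elapsedMs
    )
}
```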
  • when the sliding gesture is an edge sliding gesture, the following steps are performed: according to the starting point coordinate information and the end point coordinate information of the finger, determine whether the coordinate displacement D of the finger on the screen exceeds the preset minimum distance Dmin, and whether the sliding speed V exceeds the preset minimum speed Vmin; when D > Dmin and V > Vmin, the special effect material is started.
  • the minimum distance Dmin and the minimum speed Vmin are preset as the criteria for judging whether the user's finger slide is valid; calculating the coordinate displacement D and the sliding speed V of the finger on the screen determines whether the user is really performing a gesture operation, which avoids special effect material being added by mistake when the user accidentally touches the screen.
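  • A minimal sketch of this validity test, with placeholder threshold values chosen purely for illustration (the embodiment leaves Dmin and Vmin as presets):

```kotlin
// Illustrative thresholds only; the embodiment treats Dmin and Vmin as presets.
const val D_MIN_PX = 120f          // preset minimum distance Dmin
const val V_MIN_PX_PER_MS = 0.3f   // preset minimum speed Vmin

// An edge slide only triggers the special effect when both conditions hold,
// which filters out accidental touches on the screen.
fun isValidEdgeSlide(displacementPx: Float, speedPxPerMs: Float): Boolean =
    displacementPx > D_MIN_PX && speedPxPerMs > V_MIN_PX_PER_MS
```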
  • when the sliding gesture is a corner sliding gesture, the following steps are performed: a starting point area is preset, where the maximum abscissa of the starting point area is Xmax and the maximum ordinate is Ymax; according to the starting point coordinate information of the finger, it is judged whether the sliding start point of the finger falls within the starting point area; if the abscissa Xs < Xmax and the ordinate Ys < Ymax of the sliding start point, the sliding start point is judged to be located in the preset starting point area; according to the starting point coordinate information and the end point coordinate information of the finger, the coordinate displacement D of the finger on the screen is compared with the preset minimum distance Dmin, and the sliding speed V is compared with the preset minimum speed Vmin; when D > Dmin, V > Vmin and the slope k of the connecting line between the sliding start point and the sliding end point falls within the preset range, the special effect material is started.
  • the starting point area and the sliding area are preset: the starting point area serves as the condition for judging whether the user's finger can trigger sliding gesture recognition, and the sliding area serves as the condition for judging whether the user's finger can trigger the application of special effect material. Setting these two judgment conditions makes it possible to accurately judge whether the user's finger performs a sliding gesture at a corner of the screen and how large an area the finger sweeps.
  • the sliding areas set at the four corners of the screen have edges inclined within the range of 15° to 75°; when the user's finger performs the corner sliding gesture it covers part or all of the sliding area, which satisfies the judgment requirements of the corner sliding gesture, so that special effect material can then be applied adaptively.
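  • One way this corner-slide judgement could look in code, assuming the slope range of the connecting line is bounded by the 15° to 75° edge inclination described above; all names, and the choice of the corner at the coordinate origin, are illustrative:

```kotlin
import kotlin.math.PI
import kotlin.math.abs
import kotlin.math.tan

// Sketch of the corner-slide judgement for the corner nearest the coordinate origin.
fun isValidCornerSlide(
    startX: Float, startY: Float,
    endX: Float, endY: Float,
    displacementPx: Float, speedPxPerMs: Float,
    xMax: Float, yMax: Float,                 // extent of the preset starting point area
    dMin: Float, vMin: Float                  // preset minimum distance and speed
): Boolean {
    // Condition 1: the sliding start point must fall inside the preset corner area.
    if (!(startX < xMax && startY < yMax)) return false

    // Condition 2: displacement and speed must exceed the preset minimums.
    if (displacementPx <= dMin || speedPxPerMs <= vMin) return false

    // Condition 3: the slope k of the start-to-end connecting line must lie within
    // the range implied by the 15°..75° inclined edges of the sliding area (assumption).
    val dx = endX - startX
    if (dx == 0f) return false
    val k = abs((endY - startY) / dx)
    val kLow = tan(15.0 * PI / 180.0).toFloat()
    val kHigh = tan(75.0 * PI / 180.0).toFloat()
    return k in kLow..kHigh
}
```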
  • the step of applying the special effect material on the screen according to the sliding information of the gesture action includes: dividing the screen into a first area and a second area according to the sliding information of the gesture action; performing portrait segmentation on the portrait displayed on the screen, and replacing the part of the background located in the first area with special effect material.
  • the first area is the area of the screen swept, along the gesture action, by the dividing line that is determined by the connecting line between the sliding start point and the sliding end point and passes through the sliding end point; the second area is the area of the screen other than the first area.
  • the sliding information includes the starting point coordinate information of the sliding start point, the end point coordinate information of the sliding end point, the sliding gesture, the sliding direction and the sliding speed.
  • portrait segmentation and screen division are performed first: the portrait is extracted, the screen is divided into a first area over which the finger has slid and a second area that has not been slid over, and only the background of the first area is replaced with special effect material.
  • this method can capture the position of the finger in real time during the slide, so the first area whose background needs to be replaced can be determined continuously.
  • the area occupied by the portrait still displays the portrait and is not replaced by the background material, producing the effect that the foreground portrait stays unchanged while the background behind it changes; in this way the user's sliding gesture is captured in real time, the area swept by the user's finger is calculated in real time, and the background of that area is replaced in real time, as sketched below.
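  • A per-pixel sketch of this foreground-preserving replacement (function and parameter names are hypothetical; single-channel images are used to keep the example short):

```kotlin
// Inside the first (swept) area, background pixels are taken from the special-effect
// material while portrait pixels (mask value near 1) keep the camera frame; outside
// the first area the frame is left untouched. All arrays have identical size and
// row-major pixel order.
fun composite(
    frame: FloatArray,                     // current camera frame
    effect: FloatArray,                    // special-effect material
    portraitMask: FloatArray,              // 1 = portrait (foreground), 0 = background
    inFirstArea: (Int) -> Boolean          // per-pixel test derived from the dividing line
): FloatArray = FloatArray(frame.size) { i ->
    if (inFirstArea(i)) {
        val alpha = portraitMask[i]
        alpha * frame[i] + (1 - alpha) * effect[i]
    } else {
        frame[i]
    }
}
```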
  • the step of dividing the screen into a first area and a second area according to the sliding information of the gesture action includes: creating the dividing line from the starting point coordinate information of the sliding start point and the end point coordinate information of the sliding end point; and, according to the dividing line and the sliding direction, taking the area of the screen swept by the dividing line along the sliding direction as the first area.
  • by using the starting point coordinate information of the sliding start point and the end point coordinate information of the sliding end point, the function of the dividing line that separates the first area from the second area in the xy coordinate system can be determined through simple operations, so that the first area swept by the finger can be determined on the screen.
  • this method occupies few computing resources and processes quickly, so it can keep up with the addition of special effect material even when the sliding speed of the gesture action is high.
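  • One plausible reading of this division, sketched below: the dividing line is taken perpendicular to the start-to-end connecting line and passes through the sliding end point, so a pixel belongs to the first area when its projection onto the sliding direction has not gone past the end point. The function name is hypothetical:

```kotlin
// Classify one pixel against the dividing line described above.
fun isInFirstArea(
    px: Float, py: Float,               // pixel to classify
    startX: Float, startY: Float,       // sliding start point
    endX: Float, endY: Float            // sliding end point (the dividing line runs through it)
): Boolean {
    val dirX = endX - startX
    val dirY = endY - startY
    // Signed projection of (pixel - end point) onto the sliding direction:
    // zero or negative means the pixel has already been swept over.
    val signedProjection = (px - endX) * dirX + (py - endY) * dirY
    return signedProjection <= 0f
}
```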
  • the step of performing portrait segmentation on the portrait displayed on the screen and replacing the part of the background in the first area with special effect material includes: down-sampling the original image of the portrait using a bilinear interpolation algorithm to obtain a down-sampled image; calculating the outline of the person in the down-sampled image, and up-sampling the outline using a bilinear interpolation algorithm to obtain the outline at the original image size; segmenting the portrait from the original image based on that outline to obtain a first image; replacing the part of the background in the first area with special effect material to obtain a second image; and superimposing the first image and the second image.
  • the original image is down-sampled, which greatly reduces the computation and power consumption of portrait segmentation, so the computationally heavy deep-learning portrait segmentation model can run on mobile terminals with limited computing power and power budget while still meeting the frame rate requirements of video playback.
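  • A sketch of this down-sample / segment / up-sample pipeline on a single-channel image at least 2x2 pixels in size; the segmentation model call is a placeholder and the 1/4 down-sampling factor is an assumption, since the embodiment does not fix one:

```kotlin
// Simple single-channel image container for the sketch.
class GrayImage(val width: Int, val height: Int, val pixels: FloatArray)

// Bilinear interpolation resize, used both to down-sample the camera frame and to
// up-sample the low-resolution outline/mask back to the original size.
fun resizeBilinear(src: GrayImage, dstW: Int, dstH: Int): GrayImage {
    val dst = FloatArray(dstW * dstH)
    val xRatio = (src.width - 1).toFloat() / (dstW - 1).coerceAtLeast(1)
    val yRatio = (src.height - 1).toFloat() / (dstH - 1).coerceAtLeast(1)
    for (y in 0 until dstH) {
        for (x in 0 until dstW) {
            val sx = x * xRatio
            val sy = y * yRatio
            val x0 = sx.toInt().coerceAtMost(src.width - 2)
            val y0 = sy.toInt().coerceAtMost(src.height - 2)
            val fx = sx - x0
            val fy = sy - y0
            val p00 = src.pixels[y0 * src.width + x0]
            val p10 = src.pixels[y0 * src.width + x0 + 1]
            val p01 = src.pixels[(y0 + 1) * src.width + x0]
            val p11 = src.pixels[(y0 + 1) * src.width + x0 + 1]
            dst[y * dstW + x] =
                p00 * (1 - fx) * (1 - fy) + p10 * fx * (1 - fy) +
                p01 * (1 - fx) * fy + p11 * fx * fy
        }
    }
    return GrayImage(dstW, dstH, dst)
}

// Hypothetical pipeline: run the heavy segmentation model at 1/4 resolution,
// then up-sample the resulting mask/outline to the original frame size.
fun portraitMask(frame: GrayImage, runSegmentationModel: (GrayImage) -> GrayImage): GrayImage {
    val small = resizeBilinear(frame, frame.width / 4, frame.height / 4)  // down-sample
    val smallMask = runSegmentationModel(small)                           // model on small input
    return resizeBilinear(smallMask, frame.width, frame.height)           // up-sample result
}
```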
  • the adding level of the special effect material is selected according to the area of the first area or the ratio of the first area to the whole screen.
  • in the process of adding special effect material, the user can choose how much special effect material to add; in operation this means the adding level of the special effect material is selected according to the size of the area the user's finger sweeps across the screen, which satisfies increasingly complex special effect needs.
  • each time the area of the first area (or its screen ratio) grows by a preset increment, the adding level of the special effect material is correspondingly increased by one level.
  • when the finger sweeps across the whole screen, the entire background of the screen is replaced with special effect material.
  • the method of this preferred embodiment can facilitate the user's operation and enhance the user experience.
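  • An illustrative mapping from the swept screen ratio to an effect level; the number of levels is an assumption, since the embodiment only states that the level follows the area or screen ratio of the first area:

```kotlin
// Map the swept ratio to a discrete adding level; sweeping the whole screen
// (ratio = 1) yields the maximum level, i.e. the entire background is replaced.
fun effectLevel(firstAreaPx: Float, screenAreaPx: Float, maxLevel: Int = 5): Int {
    val ratio = (firstAreaPx / screenAreaPx).coerceIn(0f, 1f)
    return (ratio * maxLevel).toInt().coerceIn(0, maxLevel)
}
```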
  • before the step of dividing the screen into a first area and a second area according to the sliding information of the gesture action, the method includes: judging, according to the sliding speed, whether the sliding speed of the gesture action is valid; if the sliding speed is greater than a first speed, the whole background of the screen is replaced with special effect material; if the sliding speed is less than a second speed, the step of applying special effect material on the screen according to the sliding information of the gesture action is not executed; the first speed is greater than the second speed.
  • two judgment conditions for judging the sliding speed are preset.
  • when the sliding speed is greater than the first speed, it can be considered that the user wants to replace the entire background; when the sliding speed is less than the second speed, it can be considered that the user did not perform a swipe gesture, so the step of applying the special effect material does not need to be started.
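  • A sketch of this two-threshold speed gate; the concrete threshold values are placeholders:

```kotlin
// FULL replaces the whole background, PARTIAL follows the finger,
// IGNORE treats the motion as not being a slide gesture at all.
enum class SlideAction { FULL, PARTIAL, IGNORE }

fun classifyBySpeed(
    speedPxPerMs: Float,
    firstSpeed: Float = 2.0f,    // above this: replace the entire background
    secondSpeed: Float = 0.05f   // below this: do not apply any special effect
): SlideAction = when {
    speedPxPerMs > firstSpeed -> SlideAction.FULL
    speedPxPerMs < secondSpeed -> SlideAction.IGNORE
    else -> SlideAction.PARTIAL
}
```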
  • the method includes: judging the sliding direction of the gesture action; if the sliding direction is a forward direction away from the starting point of the finger slide on the screen, a new special effect material is added; if the sliding direction is a reverse direction back toward the starting point of the finger slide on the screen, the previous special effect material is restored.
  • by recognizing the sliding direction, the user is provided with a backtracking operation, so the user can freely pick a preferred special effect material from among multiple materials instead of worrying about having slid past a favorite one, which enhances the user experience.
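  • A sketch of stepping forward and backward through the special effect materials by sliding direction; the class and method names are hypothetical, and the cyclic behaviour matches the note further below that the material library is kept small and appears cyclically:

```kotlin
// Forward slides advance to a new material; reverse slides restore the previous one.
class EffectSelector(private val materials: List<String>) {
    init { require(materials.isNotEmpty()) }   // assumes a non-empty material library
    private var index = 0

    fun onForwardSlide(): String { index = (index + 1) % materials.size; return materials[index] }
    fun onReverseSlide(): String {
        index = (index - 1 + materials.size) % materials.size
        return materials[index]
    }
}
```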
  • the step of applying the special effect material on the screen according to the sliding information of the gesture action includes: according to the sliding information of the gesture action, pulling out a control panel from the sliding start point of the gesture action on the screen along the user's sliding direction, and presenting multiple special effect materials on the control panel; and applying the selected special effect material on the screen.
  • the user is thus provided with a control panel that offers a choice of materials.
  • when the special effect material is replaced in this way, the user's visual experience is better, and a special effect material that better meets the user's needs can be selected.
  • before the step of applying the special effect material on the screen according to the sliding information of the gesture action, the method includes: detecting the use frequency of each special effect material, and sorting the presentation order of the special effect materials according to the use frequency.
  • the list order of the user's commonly used special effect materials can be customized according to the user's habits, so that each time the user adds a special effect the material can be applied to the background of the screen more quickly, making use more convenient and the experience better.
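  • A one-line sketch of ordering materials by detected use frequency, most frequently used first:

```kotlin
// Materials with no recorded uses keep a count of zero and sort last.
fun orderByUsage(materials: List<String>, useCount: Map<String, Int>): List<String> =
    materials.sortedByDescending { useCount[it] ?: 0 }
```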
  • the number of special effect materials is less than 10.
  • an upper limit is set on the number of materials in the special effect material library, and these special effect materials appear cyclically while swiping, so that the user is not left unable to return to the original state after sliding because there are too many special effect materials.
  • an embodiment of the present application provides an apparatus for adding special effects to a video.
  • the apparatus includes: an information acquisition module for capturing a user's gesture action and acquiring sliding information of the gesture action; and a special effect application module for applying the special effect material on the screen according to the sliding information of the gesture action.
  • unlike the prior art, in which the user needs at least two clicks to activate special effects during a video call and the interaction process is cumbersome, the solution provided by this embodiment uses the information acquisition module and the special effect application module to capture and process the sliding information of the user's gesture action, so that the special effect is added wherever the user's finger slides.
  • the information acquisition module includes: a recording unit, configured to capture and record in real time the starting point coordinate information of the sliding start point and the end point coordinate information of the sliding end point of the user's finger in the two-dimensional xy coordinate system of the screen; a calculation unit, configured to judge the sliding direction and sliding speed of the finger according to the starting point coordinate information and the end point coordinate information; and a judging unit, configured to judge the sliding gesture adopted by the user according to the sliding direction of the finger.
  • the recording unit records the sliding information generated when the user's finger slides on the screen; the calculation unit determines, from the coordinates of the sliding start point and the sliding end point, where special effect material should be added on the screen; and the judging unit determines the sliding trajectory of the user's finger under different sliding gestures, so that special effect material can be added in real time.
  • the special effect application module includes: a segmentation unit, configured to divide the screen into a first area and a second area according to the sliding information of the gesture action; and an application unit, configured to perform portrait segmentation on the portrait displayed on the screen and replace the part of the background in the first area with special effect material.
  • the segmentation unit first calculates the size of the area that the user's finger slides over and divides the screen into the first area where the finger slid and the second area that was not slid over; the application unit first performs portrait segmentation and then replaces the background of the first area with special effect material, realizing the functions of capturing the user's sliding gesture in real time, calculating the area swept by the user's finger in real time, and replacing the background of that area in real time.
  • an embodiment of the present application provides a terminal device, including the apparatus for adding video special effects as described in the second aspect.
  • an embodiment of the present application provides a computer-readable storage medium, including a program or an instruction, and when the program or the instruction is run on a computer, the method according to the first aspect is executed.
  • the method, apparatus and terminal device for adding video special effects disclosed in the embodiments of the present application use gesture actions to replace the original way of opening special effects by clicking buttons, which reduces the complexity of user operation and allows special effects to be enabled quickly when a video call comes in. Regional portrait segmentation and background replacement or blurring can be performed according to the area or screen ratio swept by the user's finger, improving the fun, playability and interactivity of the product. The original image can also be down-sampled before portrait segmentation, greatly reducing its computation and power consumption, so that the computationally heavy deep-learning portrait segmentation model can run on mobile terminals with limited computing power and power budget while meeting the frame rate requirements of video playback.
  • FIG. 1 is a schematic structural diagram of a terminal device provided in Embodiment 1 of the present application.
  • FIG. 2 is a schematic diagram of steps of a method for adding special effects to a video provided by Embodiment 2 of the present application;
  • FIG. 3 is a schematic diagram of the steps of Step 100 in the method for adding special effects to a video provided in Embodiment 2 of the present application;
  • FIG. 4a is a schematic diagram of the Cartesian coordinate system used when an edge sliding gesture is adopted in step Step 100 of the video special effect adding method provided in Embodiment 2 of the present application;
  • FIG. 4b is a schematic diagram of the operation when an edge sliding gesture is adopted in step Step 100 of the video special effect adding method provided in Embodiment 2 of the present application;
  • FIG. 5a is a schematic diagram of the starting point area when a corner sliding gesture is adopted in step Step 100 of the video special effect adding method provided in Embodiment 2 of the present application;
  • FIG. 5b is a schematic diagram of the operation principle when a corner sliding gesture is adopted in step Step 100 of the video special effect adding method provided in Embodiment 2 of the present application;
  • FIG. 5c is a diagram of the effective sliding range when a corner sliding gesture is adopted in step Step 100 of the video special effect adding method provided in Embodiment 2 of the present application;
  • FIG. 6 is a schematic diagram of the steps of Step 200 in the method for adding special effects to a video provided in Embodiment 2 of the present application;
  • FIG. 7 is a schematic diagram of the steps of Step 210 in the video special effect adding method provided in Embodiment 2 of the present application;
  • FIG. 8 is a schematic diagram of the steps of Step 220 in the video special effect adding method provided in Embodiment 2 of the present application;
  • FIG. 9 is a schematic diagram of the contour image obtained after portrait segmentation in step Step 220 of the video special effect adding method provided in Embodiment 2 of the present application;
  • FIG. 10 is a schematic diagram of steps performed before Step 210 in the video special effect adding method provided in Embodiment 2 of the present application;
  • FIG. 11 is a schematic diagram of the operation of switching special effects materials when a user's finger slides multiple times in the method for adding special effects to a video provided in Embodiment 2 of the present application;
  • FIG. 12 is a schematic diagram of steps of another implementation of Step 200 in the method for adding special effects to a video provided in Embodiment 2 of the present application;
  • FIG. 13a and 13b are operational schematic diagrams of another implementation manner of Step 200 in the video special effect adding method provided in Embodiment 2 of the present application;
  • FIG. 14 is a schematic diagram of steps performed before Step 200 in the video special effect adding method provided in Embodiment 2 of the present application;
  • FIG. 15 is a schematic diagram of a module of a device for adding special effects to a video provided by Embodiment 3 of the present application;
  • FIG. 16 is a schematic diagram of a module of an information acquisition module in the video special effect adding device provided in Embodiment 3 of the present application;
  • FIG. 17 is a schematic block diagram of a special effect application module in the video special effect adding apparatus provided in Embodiment 3 of the present application.
  • the terminal device may be a mobile phone (also called a smart terminal device), a tablet computer, a personal digital assistant (PDA), an e-book reader, a virtual reality interactive device, or the like.
  • the terminal device can be connected to various types of communication systems, such as: a long term evolution (LTE) system, the fifth generation (5G) system with its new radio access technology (NR), and future communication systems such as 6G systems; it can also be connected to wireless local area networks (WLAN), etc.
  • an intelligent terminal device is used as an example for description.
  • the terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structures illustrated in the embodiments of the present invention do not constitute a specific limitation on the terminal device 100 .
  • the terminal device 100 may include more or fewer components than those shown in the figures, or combine some components, or split some components, or arrange the components differently.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in the processor 110 is a cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may contain multiple sets of I2C buses.
  • the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193 and the like through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate with each other through the I2C bus interface, so as to realize the touch function of the terminal device 100 .
  • the I2S interface can be used for audio communication.
  • the processor 110 may contain multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
  • the PCM interface can also be used for audio communications, sampling, quantizing and encoding analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus; it converts the data to be transmitted between serial and parallel forms.
  • a UART interface is typically used to connect the processor 110 with the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 communicates with the camera 193 through a CSI interface, so as to realize the shooting function of the terminal device 100 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to implement the display function of the terminal device 100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the terminal device 100, and can also be used to transmit data between the terminal device 100 and peripheral devices. It can also be used to connect headphones to play audio through the headphones. This interface can also be used to connect other terminal devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the terminal device 100 .
  • the terminal device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through the wireless charging coil of the terminal device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the terminal device through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the terminal device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in terminal device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide a wireless communication solution including 2G/3G/4G/5G, etc. applied on the terminal device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a separate device.
  • the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide applications on the terminal device 100 including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication technology (near field communication, NFC), infrared technology (infrared, IR) and other wireless communication solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves
  • the antenna 1 of the terminal device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the terminal device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time division-synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS) and/or satellite based augmentation systems (SBAS).
  • the terminal device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and connects the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 194 is used to display images, videos, etc., wherein the display screen 194 includes a display panel, and the display screen may specifically include a folding screen, a special-shaped screen, etc.
  • the display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the terminal device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
  • the terminal device 100 can realize the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194 and the application processor.
  • the ISP is used to process the data fed back by the camera 193 .
  • when the shutter is opened, light is transmitted through the lens to the camera's photosensitive element, which converts the optical signal into an electrical signal; the photosensitive element then passes the electrical signal to the ISP for processing, where it is converted into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the terminal device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • a digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals. For example, when the terminal device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy, and the like.
  • Video codecs are used to compress or decompress digital video.
  • the terminal device 100 may support one or more video codecs.
  • the terminal device 100 can play or record videos in various encoding formats, for example, moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the terminal device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the terminal device 100 .
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function, for example to save files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store the operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the terminal device 100 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the processor 110 executes various functional applications and data processing of the terminal device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the terminal device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In one embodiment, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • the speaker 170A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the terminal device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
  • the terminal device 100 answers a call or a voice message, the voice can be answered by placing the receiver 170B close to the human ear.
  • the microphone 170C, also called a "mike", is used to convert sound signals into electrical signals.
  • the user can input a sound signal into the microphone 170C by speaking with the mouth close to the microphone 170C.
  • the terminal device 100 may be provided with at least one microphone 170C.
  • the terminal device 100 may be provided with two microphones 170C, which may implement a noise reduction function in addition to collecting sound signals.
  • the terminal device 100 may further be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the earphone jack 170D is used to connect wired earphones.
  • the earphone interface 170D can be the USB interface 130, or can be a 3.5mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the pressure sensor 180A may be provided on the display screen 194 .
  • the capacitive pressure sensor may be comprised of at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the terminal device 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the terminal device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the terminal device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, the instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, the instruction to create a new short message is executed.
  • the gyro sensor 180B may be used to determine the motion attitude of the terminal device 100 .
  • the angular velocity of the terminal device 100 about three axes (i.e., the x, y and z axes) can be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shaking angle of the terminal device 100, calculates the distance to be compensated by the lens module according to the angle, and allows the lens to offset the shaking of the terminal device 100 through reverse motion to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenarios.
  • the air pressure sensor 180C is used to measure air pressure.
  • the terminal device 100 calculates the altitude by using the air pressure value measured by the air pressure sensor 180C to assist in positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the terminal device 100 can detect the opening and closing of the flip holster using the magnetic sensor 180D.
  • the terminal device 100 can detect the opening and closing of the flip according to the magnetic sensor 180D. Further, according to the detected opening and closing state of the leather case or the opening and closing state of the flip cover, characteristics such as automatic unlocking of the flip cover are set.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the terminal device 100 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the terminal device 100 is stationary. It can also be used to identify the posture of terminal devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • the terminal device 100 can measure the distance through infrared or laser. In one embodiment, when shooting a scene, the terminal device 100 can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the terminal device 100 emits infrared light to the outside through the light emitting diode.
  • the terminal device 100 detects infrared reflected light from nearby objects using a photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the terminal device 100 . When insufficient reflected light is detected, the terminal device 100 may determine that there is no object near the terminal device 100 .
  • the terminal device 100 can use the proximity light sensor 180G to detect that the user holds the terminal device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the terminal device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the terminal device 100 is in a pocket, so as to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the terminal device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking photos with fingerprints, answering incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the terminal device 100 uses the temperature detected by the temperature sensor 180J to execute the temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the terminal device 100 reduces the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • when the temperature is lower than another threshold, the terminal device 100 heats the battery 142 to avoid an abnormal shutdown of the terminal device 100 caused by the low temperature.
  • the terminal device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
  • the touch sensor 180K is also called a "touch device".
  • the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output associated with touch operations may be provided via display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the terminal device 100 , which is different from the position where the display screen 194 is located.
  • the touch screen composed of the touch sensor 180K and the display screen 194 may be located in the side area or the folding area of the terminal device 100, so as to determine the touched position and the touch gesture when the user's hand touches the touch screen; for example, when holding the terminal device, the user can tap any position on the touch screen with the thumb, the touch sensor 180K then detects the tap operation and transmits it to the processor, and the processor determines, according to the tap operation, that the tap operation is used to wake up the screen.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human vocal part.
  • the bone conduction sensor 180M can also be placed in contact with the human pulse to receive the blood pressure beating signal.
  • the bone conduction sensor 180M can also be disposed in an earphone to form a bone conduction earphone.
  • the audio module 170 can analyze the voice signal based on the vibration signal of the vocal vibration bone block obtained by the bone conduction sensor 180M, so as to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, and realize the function of heart rate detection.
  • the keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys.
  • the terminal device 100 may receive key input and generate key signal input related to user settings and function control of the terminal device 100 .
  • Motor 191 can generate vibrating cues.
  • the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • the motor 191 can also correspond to different vibration feedback effects for touch operations on different areas of the display screen 194 .
  • different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging state, the change of the power, and can also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be contacted and separated from the terminal device 100 by inserting into the SIM card interface 195 or pulling out from the SIM card interface 195 .
  • the terminal device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card and so on. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the plurality of cards may be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the terminal device 100 interacts with the network through the SIM card to realize functions such as calls and data communication.
  • the terminal device 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the terminal device 100 and cannot be separated from the terminal device 100 .
  • the touch display screen of the terminal device may include multiple touch display areas.
  • the folding screen of the terminal device includes a folding area in the folded state, and the folding area can also realize touch response.
  • the operations supported by a terminal device on a specific touch display area are relatively limited, and there is no operation designed specifically for such an area. Based on this, an embodiment of the present application provides a gesture interaction method.
  • the terminal device can obtain an input event of the touch response area and, in response to the input event, trigger the terminal device to execute the operation instruction corresponding to the input event, so as to implement gesture operations on the side area or the folding area of the terminal device and improve the control experience of the terminal device.
  • the memory is used to store a computer program
  • the processor is used to execute the computer program stored in the memory, so that the terminal device executes the method described in Embodiment 2 of the present application.
  • Embodiment 2 of the present application discloses a method for adding special effects to a video, which can be applied to an incoming call state, and the method includes the following steps:
  • Step100 Capture the user's gesture action, and obtain the sliding information of the gesture action
  • Step 200 Apply the special effect material to the screen according to the sliding information of the gesture action.
  • unlike the prior art, in which the user needs at least two clicks to activate special effects during a video call and the interaction process is cumbersome, the method for adding video special effects in this embodiment captures and processes the sliding information of the gesture action, which achieves the effect of adding special effects wherever the user's finger slides.
  • Step 100 capturing a user's gesture action, and acquiring the sliding information of the gesture action, including:
  • Step 110: Capture and record, in real time, the start point coordinate information of the sliding start point and the end point coordinate information of the sliding end point of the user's finger in the two-dimensional x-y coordinate system of the screen;
  • Step120 According to the starting point coordinate information and the ending point coordinate information of the finger, determine the sliding direction and sliding speed of the finger;
  • Step 130 Determine the sliding gesture adopted by the user according to the sliding direction of the finger.
  • the sliding information includes the starting point coordinate information of the sliding start point, the end point coordinate information of the sliding end point, the sliding gesture, the sliding direction and the sliding speed.
  • the coordinate information of the starting point of the sliding start and the coordinate information of the end point of the sliding end provide the basis for judging the sliding gesture, the sliding direction and the sliding speed.
  • the sliding gesture indicates the way the user's finger slides on the screen, the sliding direction indicates the path along which the user's finger slides, and the sliding speed indicates how fast the finger slides on the screen. Combining these pieces of sliding information makes it possible to accurately determine the area of the screen in which the special effect material needs to be applied.
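As an illustration of Step 110 to Step 130, the sketch below shows one way the sliding information could be assembled from the recorded start and end touch points. It is only a sketch: the SlideInfo record, its field names and the timestamp handling are assumptions, and only the quantities named above (start/end coordinates, sliding direction, sliding speed) come from the description.

```python
import math
from dataclasses import dataclass

@dataclass
class SlideInfo:
    start: tuple       # (x_s, y_s), sliding start point
    end: tuple         # (x_e, y_e), sliding end point
    direction: tuple   # unit vector of the sliding direction
    speed: float       # pixels per second
    gesture: str       # later classified as "side" or "corner"

def build_slide_info(start, end, t_start, t_end):
    """Derive sliding direction and speed from the start/end points and timestamps."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    distance = math.hypot(dx, dy)
    duration = max(t_end - t_start, 1e-6)   # guard against a zero-length gesture
    direction = (dx / distance, dy / distance) if distance else (0.0, 0.0)
    speed = distance / duration
    return SlideInfo(start, end, direction, speed, gesture="unclassified")
```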
  • the user's sliding gestures are divided into side sliding gestures and corner sliding gestures.
  • the side sliding gesture is to slide horizontally or vertically from the four sides of the mobile phone to the center of the screen;
  • the corner sliding gesture is to slide from the four corners of the screen toward the center of the screen.
  • if the sliding gesture is an edge (side) sliding gesture, the following steps are performed:
  • Step140 According to the coordinate information of the starting point and the coordinate information of the end point of the finger, determine the size relationship between the coordinate displacement D of the finger on the screen and the preset minimum distance D min , and the size relationship between the sliding speed V and the preset minimum speed V min ;
  • if the coordinate displacement D is greater than or equal to the preset minimum distance D min and the sliding speed V is greater than or equal to the preset minimum speed V min, step Step 180 is executed.
  • the core idea is to capture and record the movement trajectory and speed of the user's finger on the screen at the coordinate position (x, y) of the two-dimensional coordinate axis in real time (as shown in Figure 4a to Figure 4b).
  • the abscissa X s of the sliding start point is 0 (preferably, a fault-tolerant margin of 10 pixels can be reserved, the specific value of the margin being determined according to actual needs), and the ordinate Y s lies in the range of the middle of the screen (the maximum length of the vertical axis is Y max + 200 pixels); the abscissa X e of the sliding end point is greater than X s, and the coordinate displacement D is greater than the preset minimum distance D min.
  • the above determination process identifies the lateral pulling action of the finger shown in FIG. 4a to FIG. 4b; based on the same idea, the area over which the user is known to pull laterally can be expanded, so as to facilitate the subsequent step of applying the special effect material.
  • the side swipe gesture may coincide with the system's default Home button (start button) gesture, so in this scenario attention should be paid to system-level gesture monitoring and capture.
  • the minimum distance D min and the minimum speed V min are preset as the criteria for judging whether the user's finger slides effectively; by calculating the coordinate displacement D and the sliding speed V of the finger on the screen, it is judged whether the user's finger is really performing the gesture operation, so as to avoid special effect material being added by mistake when the user accidentally touches the screen.
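A minimal sketch of this validity check for a left-edge side swipe, assuming the SlideInfo record sketched above; the concrete threshold values and the function name are illustrative, not values given in the description.

```python
D_MIN = 80.0          # preset minimum displacement in pixels (assumed value)
V_MIN = 200.0         # preset minimum speed in pixels per second (assumed value)
EDGE_TOLERANCE = 10   # fault-tolerant margin near the screen edge, in pixels

def is_valid_side_swipe(info):
    """Check that a left-edge side swipe is an intentional gesture rather than a stray touch."""
    x_s, _ = info.start
    x_e, _ = info.end
    starts_at_edge = x_s <= EDGE_TOLERANCE        # X_s close to 0 within the tolerance
    moves_inward = x_e > x_s                      # slides toward the screen center
    displacement = abs(x_e - x_s)
    return (starts_at_edge and moves_inward
            and displacement >= D_MIN and info.speed >= V_MIN)
```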
  • for the corner swipe gesture, take sliding from the upper right corner of the screen toward the center of the screen as an example.
  • if the sliding gesture is a corner sliding gesture, the following steps are performed:
  • Step 150: Preset a starting point area; wherein the maximum value of the abscissa of the starting point area is X max, and the maximum value of the ordinate is Y max;
  • Step160 According to the coordinate information of the starting point of the finger, determine whether the sliding starting point of the finger falls within the starting point area;
  • if the abscissa X s and the ordinate Y s of the sliding start point of the finger satisfy X s ≤ X max and Y s ≤ Y max, it is determined that the sliding start point of the finger is located within the preset starting point area, and step Step 170 is then executed.
  • Step 170: According to the start point coordinate information and the end point coordinate information of the finger, determine the relationship between the coordinate displacement D of the finger on the screen and the preset minimum distance D min, and the relationship between the sliding speed V and the preset minimum speed V min;
  • Step 180 Start the special effect material.
  • the trigger condition for recognizing the angular centripetal movement action is that the starting point O must be within the box-shaped starting point area in Fig. 5b; the value 50 is a reference value, and the size of the starting point area can be adjusted according to actual needs or screen sensitivity.
  • angular centripetal sliding action recognition is a judgment rule obtained after appropriate optimization of the judgment conditions of the side sliding gesture shown in Figure 4a to Figure 4c. Taking point A in Figure 5c as the end point as an example, the coordinates of A are set as (X(A), Y(A)), and those of the starting point O as (X(O), Y(O)).
  • the judgment rules are as follows:
  • the slope k of the straight line AO is in the range (0.268, 3.73), that is, the angle corresponding to the movement offset is between 15° and 75°, as shown by the OAB triangle area in Figure 4c (the scope of the triangle area can be adjusted appropriately according to actual needs).
  • sliding area recognition (i.e., isosceles right triangle area recognition): based on the sliding end point of the finger, the following processing is performed. Taking point A in Figure 5c as an example, let A be the end point of the user's slide and draw a line through point A with a slope of −1; the triangle enclosed by this line and the coordinate axes is the sliding area. Because the slope of the line is −1, the interior angles it forms with the coordinate axes are 45°, giving the isosceles right triangle shown in Figure 5c, in preparation for the background replacement of the triangle area in the figure.
  • Figure 5c only exemplifies the situation at one corner of the terminal device; the judgment rules at the other three corners can be inferred from the situation exemplified in Figure 5c, and the final judgment rule is 0.268 < |k| < 3.73.
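As a sketch of the two checks above, the fragment below computes the slope of line AO, tests whether its absolute value lies in (0.268, 3.73) (i.e., an angle between 15° and 75°), and computes the isosceles right triangle sliding area cut off by the slope −1 line through the end point A. For simplicity the corner is assumed to sit at the coordinate origin, and all names are illustrative.

```python
import math

SLOPE_MIN = math.tan(math.radians(15))   # ~0.268
SLOPE_MAX = math.tan(math.radians(75))   # ~3.73

def is_angular_centripetal(start, end):
    """Check that the slide from O to A moves at an angle between 15 and 75 degrees."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    if dx == 0:
        return False                      # purely vertical movement falls outside the range
    k = dy / dx
    return SLOPE_MIN < abs(k) < SLOPE_MAX

def triangle_sliding_area(end):
    """Area of the isosceles right triangle cut off by the slope -1 line through A,
    with the screen corner taken as the coordinate origin."""
    # The line y = -x + c passes through A, so c = x_A + y_A; it meets the axes at
    # (c, 0) and (0, c), enclosing a right triangle with two legs of length c.
    c = end[0] + end[1]
    return 0.5 * c * c
```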
  • the starting point area and the sliding area are preset: the starting point area is used as the condition for judging whether the user's finger can trigger recognition of the sliding gesture, and the sliding area is used as the condition for judging whether the user's finger can trigger application of the special effect material. Setting these two judgment conditions makes it possible to accurately judge whether the user's finger performs a sliding gesture at a corner of the screen and how large the area covered by the slide is.
  • the edges of the sliding areas set at the four corners of the screen are inclined within the range of 15° to 75°; when the user's finger performs the corner sliding gesture, it covers part or all of the sliding area, which satisfies the judgment requirements of the corner sliding gesture, after which the special effect material can be applied adaptively.
  • step Step 200 applying the special effect material to the screen according to the sliding information of the gesture action, including:
  • Step210 According to the sliding information of the gesture action, the screen is divided into a first area and a second area;
  • Step 220 Segment the portrait displayed on the screen, and replace the part of the background in the first area on the screen with special effects material.
  • the first area is the area of the screen swept by a dividing line that is perpendicular to the line connecting the sliding start point and the sliding end point of the gesture action and that passes through the sliding end point;
  • the second area is the area of the screen other than the first area;
  • the sliding information includes the starting point coordinate information of the sliding start point, the end point coordinate information of the sliding end point, the sliding gesture, the sliding direction and the sliding speed.
  • portrait segmentation and screen division are performed first: the portrait is extracted, the screen is divided into a first area that the finger has slid over and a second area that has not been slid over, and only the background of the first area is replaced with the special effect material.
  • the method can capture the position of the finger in the sliding process in real time, so as to determine the first area that needs to replace the background.
  • the area occupied by the portrait still displays the portrait and is not replaced with the background, producing the effect that the portrait in the foreground remains unchanged while the background behind it changes. This realizes the abilities of capturing the user's sliding gesture in real time, calculating the area over which the user's finger slides in real time, and replacing the background of that area in real time.
  • Step 210 according to the sliding information of the gesture action, divide the screen into a first area and a second area, including:
  • Step211 Create a dividing line according to the coordinate information of the starting point of the sliding start point of the gesture action and the coordinate information of the end point of the sliding end point;
  • Step 212 According to the dividing line and the sliding direction, divide the area of the screen swept by the dividing line along the sliding direction into a first area.
  • the size of the area over which the user's finger slides is calculated first, and the screen is then divided into a first area that the finger has slid over and a second area that has not been slid over; the background of the first area is replaced with the special effect material, realizing the functions of capturing the user's sliding gesture in real time, calculating the area over which the user's finger slides in real time, and replacing the background of that area in real time.
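One way Step 211 and Step 212 could be realized is to treat the dividing line as the line perpendicular to the start-to-end vector and passing through the end point, and to classify each pixel by its projection onto the sliding direction. The NumPy sketch below follows that idea; the function name and the mask representation are assumptions.

```python
import numpy as np

def first_area_mask(height, width, start, end):
    """Boolean mask of the first area: pixels already swept by the dividing line,
    i.e. pixels whose projection onto the sliding direction lies behind the end point."""
    start = np.asarray(start, dtype=np.float32)
    end = np.asarray(end, dtype=np.float32)
    direction = end - start
    norm = np.linalg.norm(direction)
    if norm == 0:
        return np.zeros((height, width), dtype=bool)
    direction /= norm
    ys, xs = np.mgrid[0:height, 0:width]
    pixels = np.stack([xs, ys], axis=-1).astype(np.float32)
    # Signed distance of each pixel along the sliding direction, measured from the end point;
    # non-positive values lie on the start-point side of the dividing line.
    proj = (pixels - end) @ direction
    return proj <= 0
```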
  • Step 220 perform portrait segmentation on the portrait displayed on the screen, and replace the part of the screen where the background is located in the first area with special effects material, including:
  • Step 221 down-sampling the original image of the portrait using a bilinear interpolation algorithm to obtain a down-sampled image
  • Step222 Calculate the outline of the person in the down-sampled image (Portrait Mask), and use the bilinear interpolation algorithm to upsample the outline of the person to obtain the outline of the original image;
  • Step223 Segment the portrait from the original image based on the contour of the original image to obtain the first layer
  • Step224 Replace the part of the background in the first area with special effects material to obtain the second layer;
  • Step225 Overlay the first layer with the second layer.
  • the frame data format can be a common format such as YUV-NV21 or RGB.
  • using a bilinear interpolation algorithm, for example, the original 1920 × 1080 image is down-sampled to a 320 × 180 image to obtain the down-sampled image;
  • the deep learning model used for portrait segmentation here is not limited in the embodiments of the present invention, and common models such as CNN, FCN/FCN+/UNet, etc. may be used.
  • data carrying the outline (mask) of the person is output, which is essentially a frame of image, as shown in Figure 9; it should be noted that the resolution of this outline is temporarily 320 × 180. Computing the outline at this reduced resolution greatly reduces the computational complexity and power consumption of portrait segmentation.
  • the portrait is "keyed out” from the original image to obtain the first layer, which is then layered and rendered on the GPU with the second layer replaced by the special effect material of the background, and finally the background replacement is obtained. After effects.
  • the bilinear interpolation method is used to down-sample the original image frame by frame, so that the resolution is reduced proportionally and computing power consumption decreases; the portrait is then segmented and its outline output, after which bilinear interpolation up-samples the outline back to the resolution of the original image.
  • the power consumption and processing delay of portrait segmentation on mobile terminal devices are greatly reduced, and the frame rate requirements of 30FPS video calls are met.
  • this method down-samples the original image before performing portrait segmentation, thereby greatly reducing the computational and power-consumption overhead of portrait segmentation, so that a deep learning portrait segmentation model requiring a large amount of computation can run on mobile terminals with limited computing power and power budget while meeting the frame rate requirements of video playback.
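A compact sketch of the Step 221 to Step 225 pipeline using OpenCV and NumPy. run_segmentation_model is a placeholder for whichever CNN/FCN/UNet model is used and is assumed to return a single-channel mask at the down-sampled resolution; for brevity the sketch replaces the whole background, and restricting the replacement to the first area amounts to intersecting the background with the first-area mask shown earlier.

```python
import cv2
import numpy as np

def run_segmentation_model(small_rgb):
    """Placeholder for the deep learning portrait segmentation model.
    Assumed to return a float mask in [0, 1] with shape (180, 320)."""
    raise NotImplementedError

def replace_background(frame_rgb, effect_rgb):
    h, w = frame_rgb.shape[:2]
    # Step 221: bilinear down-sampling, e.g. 1920x1080 -> 320x180.
    small = cv2.resize(frame_rgb, (320, 180), interpolation=cv2.INTER_LINEAR)
    # Step 222: compute the portrait outline at low resolution, then up-sample it.
    mask_small = run_segmentation_model(small)
    mask = cv2.resize(mask_small, (w, h), interpolation=cv2.INTER_LINEAR)[..., None]
    # Steps 223-225: keep the portrait (first layer), show the special effect
    # material (second layer) where the background was, then overlay the two.
    composed = frame_rgb * mask + effect_rgb * (1.0 - mask)
    return composed.astype(np.uint8)
```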
  • the adding level of the special effect material is selected according to the area of the first area or the screen ratio of the first area on the screen.
  • in the process of adding special effect materials, the user can choose how much special effect material is added.
  • in actual operation, the user can select how much special effect material is added according to the size of the area over which the finger slides across the screen. This manner of operation can meet users' needs for richer and more complex special effects.
  • the addition level of the special effect material is increased by one level for every additional 10% of the screen covered by the gesture action.
  • the method for adding video special effects in this embodiment can provide users with a smoother experience of adding video special effects.
  • the beautification algorithm can be triggered by the side sliding gesture capture method described in Figures 4a to 4c, sliding along the horizontal/vertical direction of the screen, and the ratio of the slid area to the entire screen is used as the beauty level.
  • taking ten-level beauty as an example: start sliding from the left or from the top; when the slid area reaches 10%, the first-level beauty is turned on; when the slide reaches all the way to the right, the tenth-level beauty is turned on, and so on.
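A small sketch of this ratio-to-level mapping, assuming ten levels and that the slid screen ratio is already available; the function name is illustrative.

```python
def beauty_level(slid_ratio, levels=10):
    """Map the fraction of the screen slid over (0.0-1.0) to a beauty level 0..levels."""
    slid_ratio = min(max(slid_ratio, 0.0), 1.0)   # clamp to a valid ratio
    return int(slid_ratio * levels)               # 10% -> level 1, ..., 100% -> level 10
```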
  • since beauty is a level-based special effect, it is recommended to trigger it with the edge swipe gesture capture method; however, if business requirements or product design call for it, the corner swipe gesture capture method can also be used as the trigger. The two implementation ideas are similar.
  • the method of this preferred embodiment can facilitate user operations and enhance user experience.
  • before step Step 210 of dividing the screen into a first area and a second area according to the sliding information of the gesture action, the method includes:
  • Step 201: Determine whether the sliding of the gesture action is valid according to the sliding speed;
  • Step 202: If the sliding speed is greater than a first speed, replace the background of the screen with the special effect material as a whole; if the sliding speed is less than a second speed, step Step 200 is not executed; wherein the first speed is greater than the second speed.
  • two judgment conditions on the sliding speed are preset, and whether the current slide is valid is judged from the user's sliding speed. If the slide is fast and the sliding speed is greater than the first speed, it can be considered that the user wants the entire background to be replaced, so complete replacement or blurring of the background is enabled. When the sliding speed is less than the second speed, it can be considered that the user did not perform a sliding gesture, and the step of applying the special effect material does not need to be started.
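The speed-based decision could be expressed as below; the two numeric thresholds are assumptions standing in for the preset first and second speeds.

```python
FIRST_SPEED = 1500.0   # assumed threshold above which the whole background is replaced, px/s
SECOND_SPEED = 100.0   # assumed threshold below which the slide is ignored, px/s

def classify_slide_by_speed(speed):
    """Decide how a gesture should be handled based on its sliding speed."""
    if speed > FIRST_SPEED:
        return "replace_whole_background"   # fast slide: replace or blur the entire background
    if speed < SECOND_SPEED:
        return "ignore"                     # too slow: Step 200 is not executed
    return "replace_first_area"             # normal case: regional replacement
```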
  • after step Step 220 of segmenting the portrait displayed on the screen and replacing the part of the background located in the first area with the special effect material, the method includes:
  • Step 230: Determine the sliding direction of the gesture action;
  • Step 240: If the sliding direction is the same as that of the previous slide, add a new special effect material;
  • Step 250: If the sliding direction is opposite to that of the previous slide, restore the previous special effect material.
  • the special effect material can be switched by sliding multiple times.
  • the method for adding video special effects in this embodiment provides the user with a way to go back by recognizing the sliding direction, and can cancel the background replacement/blurring of the screen; for example, sliding the finger to the left enables background replacement/blurring, sliding left again changes to another background material, and sliding right (that is, the reverse operation) restores the previous background material or the real background.
  • the user can freely select the preferred special effect material from among multiple special effect materials, without worrying about sliding past a favorite special effect material, which enhances the user experience.
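One possible realization of this forward/backward switching over a short list of materials; the class, its method names and the cyclic behaviour are assumptions made for illustration.

```python
class MaterialSwitcher:
    """Cycle through a short list of special effect materials with directional slides."""

    def __init__(self, materials):
        self.materials = list(materials)
        self.index = -1              # -1 means the real background is shown

    def slide_forward(self):
        """Left slide: enable the effect or advance to the next special effect material."""
        self.index = (self.index + 1) % len(self.materials)
        return self.materials[self.index]

    def slide_backward(self):
        """Right slide (reverse operation): restore the previous material or the real background."""
        if self.index < 0:
            return None              # already showing the real background
        self.index -= 1
        return self.materials[self.index] if self.index >= 0 else None
```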
  • Step 200 applies the special effect material to the screen according to the sliding information of the gesture action, including:
  • Step 210′: According to the sliding information of the gesture action, pull out a control panel from the sliding start point of the gesture action on the screen, following the user's sliding direction, and place multiple special effect materials on the control panel;
  • Step220 Apply the special effect material to the screen.
  • the purpose of the method for adding video special effects in this embodiment is to realize subregional background replacement/blurring; it is not limited to being triggered by the above-mentioned edge sliding and corner sliding gesture capture methods.
  • the core of this method is to use sliding gestures to quickly activate special effects.
  • the area of the edge-slide or corner-slide sliding region can be used as the range of background replacement/blurring, and the user is provided with a selectable control panel, which gives users a better visual experience and lets them choose special effect materials that better match their needs.
  • before Step 200 of applying the special effect material to the screen according to the sliding information of the gesture action, the method further includes:
  • Step200' Detect the frequency of use of each special effect material, and sort the presentation order of the special effect material according to the frequency of use.
  • the method for adding video special effects in this embodiment can customize the order of the list of the user's commonly used special effect materials according to the user's habits; for example, the most recently used special effect material is presented first, or the materials are sorted by frequency of use, so that the user can add special effect materials to the background of the screen more quickly each time, making use more convenient and the experience better.
  • the number of special effect materials is less than 10.
  • the method for adding video special effects in this embodiment sets an upper limit on the number of materials in the special effect material library, and these special effect materials appear cyclically while swiping, so as to avoid the situation where too many special effect materials make it difficult for the user to return to the original state after sliding.
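A sketch combining the frequency-based ordering of Step 200′ with the capped, cyclic material list; the cap of 10 and the cyclic presentation come from the text, while the data structure and method names are assumed.

```python
from collections import Counter

MAX_MATERIALS = 10        # upper limit on the number of special effect materials

class MaterialLibrary:
    def __init__(self, materials):
        self.materials = list(materials)[:MAX_MATERIALS]
        self.usage = Counter()

    def record_use(self, material):
        self.usage[material] += 1

    def presentation_order(self):
        """Most frequently used materials are presented first."""
        return sorted(self.materials, key=lambda m: -self.usage[m])

    def material_at(self, slide_count):
        """Materials appear cyclically as the user keeps sliding."""
        order = self.presentation_order()
        return order[slide_count % len(order)]
```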
  • a sliding gesture is used to replace the original interaction design of opening special effects by clicking a button.
  • according to the area or screen ratio of the region over which the user's finger slides, regional portrait segmentation and background replacement/blurring are performed, realizing region-by-region background replacement/blurring.
  • the screen can be divided into areas according to the position of the user's finger on the screen and the angle of the swipe gesture, and the part of the video stream corresponding to the replaced area does not need to be transmitted, thereby saving video stream data transmission.
  • the beauty level can be adjusted based on the area or screen ratio of the region over which the user's finger slides. It is also possible to treat a complete finger sliding motion as switching the background replacement material, beauty material, 3D-Animoji or sticker material.
  • this method also uses the capture and analysis method of angular centripetal swipe gesture.
  • FIG. 15 is a video special effect adding device provided by Embodiment 3 of the present application, and the device includes:
  • the information acquisition module 10 is used to capture the gesture action of the user and acquire the sliding information of the gesture action;
  • the special effect application module 20 is configured to apply the special effect material to the area where the gesture action slides on the screen according to the sliding information of the gesture action.
  • unlike the prior art, in which the user needs at least two clicks to activate special effects during a video call and the interaction process is cumbersome, the device for adding video special effects in this embodiment uses the information acquisition module 10 and the special effect application module 20 to capture and process the sliding information of the user's gesture, so as to achieve the effect of adding the special effect wherever the user's finger slides.
  • the information acquisition module 10 includes:
  • the recording unit 11 is used to capture and record, in real time, the start point coordinate information of the sliding start point and the end point coordinate information of the sliding end point of the user's finger in the two-dimensional x-y coordinate system of the screen;
  • the calculation unit 12 is used to judge the sliding direction and sliding speed of the finger according to the starting point coordinate information and the ending point coordinate information of the finger;
  • the determining unit 13 is configured to determine the sliding gesture adopted by the user according to the sliding direction of the finger.
  • the recording unit 11 records the sliding information generated when the user's finger slides on the screen, and the calculation unit 12 determines the coordinates of the sliding start point and the sliding end point of the user's finger when the user slides on the screen.
  • the determining unit 13 judges the sliding trajectories of the user's finger for the different sliding gestures used by the user, so as to meet the requirement of adding special effect materials in real time.
  • the special effect application module 20 includes:
  • the dividing unit 21 is used for dividing the screen into a first area and a second area according to the sliding information of the gesture action;
  • the application unit 22 is configured to perform portrait segmentation on the portrait displayed on the screen, and perform special effect material replacement on the part of the background located in the first area on the screen.
  • the dividing unit 21 first calculates the size of the area over which the user's finger slides and divides the screen into a first area that the finger has slid over and a second area that has not been slid over; the application unit 22 first performs portrait segmentation and then replaces the background of the first area with the special effect material, realizing the functions of capturing the user's sliding gesture in real time, calculating the area over which the user's finger slides in real time, and replacing the background of that area in real time.
  • Embodiment 4 of the present application provides a terminal device, including the apparatus for adding video special effects as described in Embodiment 2 of the present application.
  • Embodiment 5 of the present application provides a computer-readable storage medium, including a program or an instruction, and when the program or instruction is run on a computer, the method described in Embodiment 1 of the present application is executed.
  • the above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware or any combination thereof.
  • when implemented by software, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that includes an integration of one or more available media.
  • the available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., high-density digital video discs (DVDs)), or semiconductor media (e.g., solid state drives (SSDs)), etc.
  • the method, device, and terminal device for adding video special effects disclosed in the embodiments of the present application use gesture actions to replace the original interaction of opening special effects by clicking buttons, which reduces the complexity of user operations and enables special effects to be activated quickly when a video call comes in. Regional portrait segmentation and background replacement or blurring can be performed according to the area or screen ratio over which the user's finger slides, improving the fun, playability and interactivity of the product. The original image can also be down-sampled before portrait segmentation, greatly reducing the computational and power-consumption overhead of portrait segmentation, so that a deep learning portrait segmentation model requiring a large amount of computation can run on mobile terminals with limited computing power and power budget while meeting the frame rate requirements of video playback.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method and apparatus for adding a special effect to a video, and a terminal device, activate the special effect by replacing a click with a gesture action, so as to reduce the complexity of user operations and quickly enable the special effect during a video call. Regional portrait segmentation and background blurring or replacement are implemented according to the area of the region over which the user's finger slides or its ratio of the image, so as to improve the fun, playability and interactivity of the product. Before portrait segmentation, down-sampling is performed on the original image, so as to considerably reduce the computation and power consumption of portrait segmentation, so that a deep learning portrait segmentation model algorithm requiring a large number of computations can run on a mobile terminal with limited computing power and power consumption, thereby meeting the frame rate requirements for video playback.
PCT/CN2021/118451 2020-09-25 2021-09-15 Procédé et appareil d'ajout d'effet spécial dans une vidéo et dispositif terminal WO2022062985A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011021330.0 2020-09-25
CN202011021330.0A CN114257775B (zh) 2020-09-25 2020-09-25 视频特效添加方法、装置及终端设备

Publications (1)

Publication Number Publication Date
WO2022062985A1 true WO2022062985A1 (fr) 2022-03-31

Family

ID=80790250

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/118451 WO2022062985A1 (fr) 2020-09-25 2021-09-15 Procédé et appareil d'ajout d'effet spécial dans une vidéo et dispositif terminal

Country Status (2)

Country Link
CN (2) CN114257775B (fr)
WO (1) WO2022062985A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115480651A (zh) * 2022-11-04 2022-12-16 深圳润方创新技术有限公司 具有临摹内容分析功能电子画板的控制方法及电子画板

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130141327A1 (en) * 2011-12-05 2013-06-06 Wistron Corp. Gesture input method and system
CN105808145A (zh) * 2016-03-28 2016-07-27 努比亚技术有限公司 一种实现图像处理的方法及终端
CN106385591A (zh) * 2016-10-17 2017-02-08 腾讯科技(上海)有限公司 视频处理方法及视频处理装置
CN106951090A (zh) * 2017-03-29 2017-07-14 北京小米移动软件有限公司 图片处理方法及装置
CN107340964A (zh) * 2017-06-02 2017-11-10 武汉斗鱼网络科技有限公司 一种视图的动画效果实现方法及装置
CN108984094A (zh) * 2018-06-29 2018-12-11 北京微播视界科技有限公司 切换全局特效的方法、装置、终端设备及存储介质

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104780093B (zh) * 2014-01-15 2018-05-01 阿里巴巴集团控股有限公司 即时通讯过程中的表情信息处理方法及装置
CN104866755B (zh) * 2015-06-11 2018-03-30 北京金山安全软件有限公司 应用程序解锁界面背景图片的设置方法、装置及电子设备
CN105892898A (zh) * 2015-11-20 2016-08-24 乐视移动智能信息技术(北京)有限公司 通知中心呼出方法、装置及系统
CN106020664B (zh) * 2016-05-11 2019-07-09 广东合晟网络科技有限公司 图像处理方法
CN109391792B (zh) * 2017-08-03 2021-10-29 腾讯科技(深圳)有限公司 视频通信的方法、装置、终端及计算机可读存储介质
CN108022279B (zh) * 2017-11-30 2021-07-06 广州市百果园信息技术有限公司 视频特效添加方法、装置及智能移动终端
CN107948667B (zh) * 2017-12-05 2020-06-30 广州酷狗计算机科技有限公司 在直播视频中添加显示特效的方法和装置
CN109089059A (zh) * 2018-10-19 2018-12-25 北京微播视界科技有限公司 视频生成的方法、装置、电子设备及计算机存储介质
US11218646B2 (en) * 2018-10-29 2022-01-04 Henry M. Pena Real time video special effects system and method
US10388322B1 (en) * 2018-10-29 2019-08-20 Henry M. Pena Real time video special effects system and method
CN110944230B (zh) * 2019-11-21 2021-09-10 北京达佳互联信息技术有限公司 视频特效的添加方法、装置、电子设备及存储介质
CN111050203B (zh) * 2019-12-06 2022-06-14 腾讯科技(深圳)有限公司 一种视频处理方法、装置、视频处理设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130141327A1 (en) * 2011-12-05 2013-06-06 Wistron Corp. Gesture input method and system
CN105808145A (zh) * 2016-03-28 2016-07-27 努比亚技术有限公司 一种实现图像处理的方法及终端
CN106385591A (zh) * 2016-10-17 2017-02-08 腾讯科技(上海)有限公司 视频处理方法及视频处理装置
CN106951090A (zh) * 2017-03-29 2017-07-14 北京小米移动软件有限公司 图片处理方法及装置
CN107340964A (zh) * 2017-06-02 2017-11-10 武汉斗鱼网络科技有限公司 一种视图的动画效果实现方法及装置
CN108984094A (zh) * 2018-06-29 2018-12-11 北京微播视界科技有限公司 切换全局特效的方法、装置、终端设备及存储介质

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115480651A (zh) * 2022-11-04 2022-12-16 深圳润方创新技术有限公司 具有临摹内容分析功能电子画板的控制方法及电子画板

Also Published As

Publication number Publication date
CN116437034A (zh) 2023-07-14
CN114257775A (zh) 2022-03-29
CN114257775B (zh) 2023-04-07

Similar Documents

Publication Publication Date Title
US11785329B2 (en) Camera switching method for terminal, and terminal
CN114816210B (zh) 一种移动终端的全屏显示方法及设备
WO2021017889A1 (fr) Procédé d'affichage d'appel vidéo appliqué à un dispositif électronique et appareil associé
WO2021213120A1 (fr) Procédé et appareil de projection d'écran et dispositif électronique
WO2021000881A1 (fr) Procédé de division d'écran et dispositif électronique
US20230046708A1 (en) Application Interface Interaction Method, Electronic Device, and Computer-Readable Storage Medium
EP4325879A1 (fr) Procédé permettant d'afficher une image dans une scène photographique et dispositif électronique
WO2021052214A1 (fr) Procédé et appareil d'interaction par geste de la main et dispositif terminal
CN111010506A (zh) 一种拍摄方法及电子设备
EP4050883A1 (fr) Procédé de photographie et dispositif électronique
WO2021036770A1 (fr) Procédé de traitement d'écran partagé et dispositif terminal
WO2021180089A1 (fr) Procédé et appareil de commutation d'interface et dispositif électronique
CN110559645B (zh) 一种应用的运行方法及电子设备
WO2021052407A1 (fr) Procédé de commande de dispositif électronique et dispositif électronique
WO2022001619A1 (fr) Procédé de capture d'écran et dispositif électronique
WO2022001258A1 (fr) Procédé et appareil d'affichage à écrans multiples, dispositif terminal et support de stockage
CN113935898A (zh) 图像处理方法、系统、电子设备及计算机可读存储介质
WO2021042878A1 (fr) Procédé photographique et dispositif électronique
WO2022062985A1 (fr) Procédé et appareil d'ajout d'effet spécial dans une vidéo et dispositif terminal
CN114089902A (zh) 手势交互方法、装置及终端设备
WO2022078116A1 (fr) Procédé de génération d'image à effet de pinceau, procédé et dispositif d'édition d'image et support de stockage
WO2022033344A1 (fr) Procédé de stabilisation vidéo, dispositif de terminal et support de stockage lisible par ordinateur
WO2022252786A1 (fr) Procédé d'affichage d'écran divisé en fenêtres et dispositif électronique
CN114579900A (zh) 跨设备的页面切换方法、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21871356

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21871356

Country of ref document: EP

Kind code of ref document: A1