CN114390341B - Video recording method, electronic equipment, storage medium and chip - Google Patents


Info

Publication number
CN114390341B
Authority
CN
China
Prior art keywords
video
keyword
audio
recorded
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110221755.4A
Other languages
Chinese (zh)
Other versions
CN114390341A (en)
Inventor
张亚运
祝炎明
陈蔚
胡德启
谢小灵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN114390341A publication Critical patent/CN114390341A/en
Application granted granted Critical
Publication of CN114390341B publication Critical patent/CN114390341B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334Recording operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4394Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Abstract

This application relates to the field of terminal technologies and discloses a video recording method and an electronic device, which are used to lower the production threshold of short videos and improve their production efficiency. The method includes: the electronic device detects a first operation for starting video recording; in response to the first operation, the electronic device starts video recording; and when a first keyword from a first keyword library is detected in the recorded audio, the electronic device replaces a first number of video frames recorded after the first keyword is detected with a picture or animation associated with the first keyword.

Description

Video recording method, electronic equipment, storage medium and chip
Cross Reference to Related Applications
This application claims priority to Chinese Patent Application No. 202011142469.0, entitled "A Video Recording Method and Electronic Device", filed with the China National Intellectual Property Administration on October 22, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of this application relate to the field of terminal technologies, and in particular, to a video recording method and an electronic device.
Background
With the rise of new media platforms such as self-media, sharing short videos has become a new form of social leisure. There is currently great demand for short-video production, and various pictures, animations, and the like need to be inserted during production. However, this is usually done only after recording finishes, by an editor using video processing software to post-edit the recorded video file. Post-editing with video processing software has a high barrier to use and a heavy workload, making short-video production neither convenient nor efficient and degrading the user experience.
Disclosure of Invention
Embodiments of this application provide a video recording method and an electronic device, to solve the problems that short-video production has a high threshold and is neither convenient nor efficient.
In a first aspect, an embodiment of this application provides a video recording method, including: the electronic device detects a first operation for starting video recording; in response to the first operation, the electronic device starts video recording; and when a first keyword from a first keyword library is detected in the recorded audio, the electronic device replaces a first number of video frames recorded after the first keyword is detected with a picture or animation associated with the first keyword.
In the embodiments of this application, while recording video, the electronic device can detect in real time whether a first keyword appears in the recorded audio; when a first keyword is detected, the first number of video frames recorded after the detection are replaced with the picture or animation associated with that keyword. The video is thus edited while it is recorded, without any user operation, which lowers the production threshold of short videos and improves both production efficiency and the user experience.
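As an illustration only, the frame-replacement step described above can be sketched as follows. The keyword library, media names, frame representation, and function name are hypothetical assumptions for this sketch, not the patented implementation.

```python
# Hypothetical sketch of keyword-triggered frame replacement: when a keyword
# from the first keyword library is recognized in the recorded audio, the
# first number of video frames recorded after the detection point are
# replaced with the associated picture/animation.

KEYWORD_MEDIA = {"rain": "rain.gif", "birthday": "cake.png"}  # illustrative library

def replace_frames(video_frames, detections, first_number=4):
    """video_frames: recorded frames; detections: (frame_index, keyword) pairs
    giving the frame at which each keyword was recognized in the audio."""
    frames = list(video_frames)
    for idx, word in detections:
        media = KEYWORD_MEDIA.get(word)
        if media is None:
            continue  # keyword not in the library: leave the frames untouched
        # Replace the first_number frames recorded after the detection point.
        for i in range(idx, min(idx + first_number, len(frames))):
            frames[i] = media
    return frames

out = replace_frames([f"f{i}" for i in range(10)], [(2, "rain")])
```

With a detection at frame 2 and `first_number=4`, frames 2 through 5 are replaced while the remaining frames are kept unchanged.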
In one possible design, replacing the first number of video frames with the picture or animation associated with the first keyword includes: the electronic device detects a second operation for replacing the picture or animation associated with the first keyword with a picture or animation provided by the user; and in response to the second operation, the electronic device replaces the picture or animation associated with the first keyword with the picture or animation provided by the user.
In this design, the electronic device can insert a picture or animation of the user's choosing into the video according to the user's operation, which helps meet personalized needs and improves the user experience.
In one possible design, the method further includes: the electronic device detects a third operation for adjusting the start position or end position of the picture or animation in the recorded video; and in response to the third operation, the electronic device adjusts the start position or end position of the picture or animation in the recorded video.
In this design, the user can directly adjust the start or end position of the inserted picture or animation in the recorded video, which lowers the barrier to editing the video and improves the efficiency of short-video production.
In one possible design, the method further includes: when a second keyword from a second keyword library is detected in the recorded audio, the electronic device replaces a second number of audio frames recorded after the second keyword is detected with the audio special effect associated with the second keyword, or replaces a third number of audio frames containing the second keyword with the audio special effect associated with the second keyword.
In this design, the electronic device can add audio special effects automatically during recording, which helps lower the production threshold of short videos and improve their production efficiency.
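The two substitution modes in this design (replacing the audio frames recorded after the keyword, or replacing the frames that contain the spoken keyword itself) can be sketched as follows. The effect library, data shapes, and names are illustrative assumptions.

```python
# Hypothetical sketch of the audio-effect substitution: either the audio
# frames recorded after the keyword, or the frames containing the spoken
# keyword itself, are replaced by the associated sound effect.

SOUND_EFFECTS = {"boom": "explosion.wav"}  # illustrative second keyword library

def apply_audio_effect(audio_frames, start, end, keyword,
                       second_number=2, replace_spoken=False):
    """start..end are the indices of the audio frames containing the keyword."""
    frames = list(audio_frames)
    effect = SOUND_EFFECTS.get(keyword)
    if effect is None:
        return frames
    if replace_spoken:
        # Replace the frames that contain the keyword itself.
        span = range(start, min(end + 1, len(frames)))
    else:
        # Replace the frames recorded after the keyword was detected.
        span = range(end + 1, min(end + 1 + second_number, len(frames)))
    for i in span:
        frames[i] = effect
    return frames

a = [f"a{i}" for i in range(8)]
after = apply_audio_effect(a, 2, 3, "boom")                       # frames after the keyword
spoken = apply_audio_effect(a, 2, 3, "boom", replace_spoken=True)  # frames containing it
```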
In one possible design, the method further includes: when a key gesture from a gesture library is detected in the recorded video, the electronic device adds the video special effect associated with the key gesture to a fourth number of video frames that start with the video frame in which the key gesture is detected and that contain the key gesture.
In this design, the electronic device can add video special effects automatically during recording, which helps lower the production threshold of short videos and improve their production efficiency.
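The gesture-triggered design above can be sketched like this: starting with the frame in which the key gesture is detected, the associated effect is overlaid on a fixed number of frames. The gesture library and effect names are illustrative assumptions.

```python
# Hypothetical sketch of gesture-triggered video effects: the effect
# associated with a key gesture is overlaid on the fourth number of frames
# beginning with the detection frame.

GESTURE_EFFECTS = {"heart": "hearts_overlay"}  # illustrative gesture library

def add_gesture_effects(frame_count, detections, fourth_number=3):
    """detections: (frame_index, gesture) pairs. Returns per-frame effect lists."""
    effects = [[] for _ in range(frame_count)]
    for idx, gesture in detections:
        fx = GESTURE_EFFECTS.get(gesture)
        if fx is None:
            continue  # gesture not in the library: no effect added
        # Overlay the effect starting with the detection frame.
        for i in range(idx, min(idx + fourth_number, frame_count)):
            effects[i].append(fx)
    return effects

fx = add_gesture_effects(6, [(1, "heart")])
```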
In one possible design, the method further includes: the electronic device detects a fourth operation for enabling fast recording; and in response to the fourth operation, the electronic device discards video frames and audio frames from the recorded video and audio at intervals determined by the fast-recording speed multiplier.
In one possible design, the method further includes: the electronic device detects a fifth operation for enabling slow recording; and in response to the fifth operation, the electronic device adjusts the switching rate of video frames and audio frames in the recorded video and audio according to the slow-recording speed multiplier.
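The two designs above can be sketched together: fast recording keeps only every n-th recorded frame, while slow recording stretches the presentation timestamps so frames switch n times more slowly. The function names and timestamp representation are illustrative assumptions.

```python
# Hypothetical sketch of the fast/slow recording designs.

def fast_record(frames, multiplier=2):
    # Discard frames at intervals so the result plays back `multiplier`x faster.
    return frames[::multiplier]

def slow_record(timestamps_ms, multiplier=2):
    # Slow the frame-switching rate by spacing presentation timestamps out.
    return [t * multiplier for t in timestamps_ms]

fast = fast_record(list(range(10)), 2)   # keeps frames 0, 2, 4, 6, 8
slow = slow_record([0, 33, 66], 2)       # timestamps become 0, 66, 132
```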
In a second aspect, embodiments of the present application provide an electronic device comprising modules/units performing the method of the first aspect or any one of the possible designs of the first aspect; these modules/units may be implemented by hardware, or may be implemented by hardware executing corresponding software.
In a third aspect, embodiments of the present application provide an electronic device including a memory and a processor, the memory having a computer program stored therein; the computer program, when executed by the processor, causes the electronic device to perform the method of any one of the possible designs of the first aspect or the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium comprising a computer program which, when run on an electronic device, causes the electronic device to perform the method of the first aspect or any one of the possible designs of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product which, when run on an electronic device, causes the electronic device to perform the method of the first aspect or any one of the possible designs of the first aspect.
In a sixth aspect, an embodiment of this application provides a chip, configured to call and execute a computer program stored in a memory, to perform the method of the first aspect or any one of the possible designs of the first aspect.
For the technical effects achieved by the second to sixth aspects, refer to the technical effects of the first aspect; details are not repeated here.
Drawings
Fig. 1 is a schematic diagram of video recording according to an embodiment of this application;
Fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of this application;
Fig. 3 is a schematic diagram of an interface according to an embodiment of this application;
Fig. 4 is a first schematic diagram of a video recording effect according to an embodiment of this application;
Fig. 5 is a schematic diagram of keyword detection according to an embodiment of this application;
Fig. 6 is a schematic diagram of a process of inserting pictures or animations during video recording according to an embodiment of this application;
Fig. 7 is a schematic diagram of a video editing interface according to an embodiment of this application;
Fig. 8 is a second schematic diagram of a video recording effect according to an embodiment of this application;
Fig. 9 is a schematic diagram of a process of inserting audio special effects during video recording according to an embodiment of this application;
Fig. 10 is a first schematic diagram of a process of adding video special effects during video recording according to an embodiment of this application;
Fig. 11 is a first schematic diagram of adding a video special effect to a video frame according to an embodiment of this application;
Fig. 12 is a second schematic diagram of adding a video special effect to a video frame according to an embodiment of this application;
Fig. 13 is a second schematic diagram of a process of adding video special effects during video recording according to an embodiment of this application;
Fig. 14 is a third schematic diagram of a process of adding video special effects during video recording according to an embodiment of this application;
Fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of this application.
Detailed Description
Currently, electronic devices can provide corresponding functions to users through application programs. For example, an electronic device can provide a video recording function through a video recording application. Typically, the user operates the video recording application, and the electronic device records video in response to that operation. After recording ends, the electronic device saves the recorded video as a video file (such as an MP4 file or a 3gp file). The user then uses the electronic device to post-process the stored video file, for example inserting pictures, animations, and the like into the video. In some embodiments, in response to the user's operation of video editing software (for example, a quick-clip application), the electronic device post-edits the video recorded by the video recording application, inserting pictures, animations, and the like into it.
For example, as shown in Fig. 1, in response to a received video recording operation from the user, the video recording application activates the camera and the microphone, passes the corresponding video recording parameters (for example, video resolution and frame rate) to a video processing component (or device), and passes the corresponding audio recording parameters (for example, audio frame rate) to an audio processing component (or device). The camera collects images, and the video processing component processes them according to the video recording parameters to record video; the microphone collects sound signals, and the audio processing component processes them according to the audio recording parameters to record audio. The recorded video and audio are then combined into a video file (such as an MP4 file) containing both. However, this video file does not yet include the pictures, animations, audio special effects, and so on that the user requires. Finally, the electronic device post-processes the resulting video file, editing pictures, animations, audio special effects, and the like into it, to obtain the short video the user wants.
However, in the video recording method shown in Fig. 1, a short video containing the pictures, animations, audio special effects, and so on that the user requires can only be obtained through post-processing with video editing software. Such software has a high barrier to use and a heavy workload, so many users are unfamiliar with it or unable to use it at all, making short-video production neither convenient nor efficient.
In view of this, embodiments of this application provide a video recording method that lets an electronic device insert the corresponding pictures, animations, audio special effects, and so on during video recording, editing the video while recording it. The short video the user wants is thus available as soon as recording ends, which simplifies the workflow, is convenient for the user, and improves the user experience.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. For the convenience of understanding by those skilled in the art, some of the terms in the embodiments of the present application will be explained first.
1) Phonemes are the smallest phonetic units, divided according to the natural properties of speech. Analyzed by the pronunciation actions within a syllable, each action constitutes one phoneme, for example b, p, m, and f. A syllable is the basic unit of phonetic structure formed by one or more phonemes; in Chinese, the pronunciation of one character is one syllable. For example, the word "putonghua" (Mandarin) consists of three syllables and can be decomposed into eight phonemes: p, u, t, o, ng, h, u, and a.
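The syllable-to-phoneme decomposition in the example above can be shown with a toy table. The mapping is a simplification for illustration only, covering just the three syllables mentioned in the text.

```python
# Toy illustration of the example in the text: the three syllables of
# "putonghua" decompose into eight phonemes. The table is an illustrative
# simplification, not a complete phoneme inventory.

SYLLABLE_PHONEMES = {
    "pu": ["p", "u"],
    "tong": ["t", "o", "ng"],
    "hua": ["h", "u", "a"],
}

def to_phonemes(syllables):
    return [ph for s in syllables for ph in SYLLABLE_PHONEMES[s]]

phonemes = to_phonemes(["pu", "tong", "hua"])
```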
2) Acoustic models and language models. An acoustic model, one of the most important parts of a speech recognition system, maps the acoustic features of speech to phonemes. A language model describes the probability distribution over word sequences, reflecting how likely different word sequences are in the language; language models are widely used in fields such as speech recognition and machine translation. For example, in speech recognition a language model can be used to pick the most probable word sequence from multiple hypothesized ones: using the language model, the sentence or word sequence with the highest probability (best match) can be searched for in the space formed by the sentences or word strings consistent with the pronunciation sequence. Common language models include the N-gram language model (N-gram LM) and the like.
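A minimal sketch of an N-gram language model (here N = 2) follows: bigram probabilities are estimated from counts, which a recognizer could use to score candidate word sequences. The toy corpus is an illustrative assumption.

```python
# Minimal bigram language model: P(w2 | w1) estimated by relative counts.
from collections import Counter

def train_bigram(corpus):
    """corpus: list of token lists. Returns a function giving P(w2 | w1)."""
    bigrams, unigrams = Counter(), Counter()
    for sent in corpus:
        for w1, w2 in zip(sent, sent[1:]):
            bigrams[(w1, w2)] += 1
            unigrams[w1] += 1

    def prob(w1, w2):
        # Unseen context: probability 0 (a real model would smooth this).
        return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

    return prob

p = train_bigram([["record", "a", "video"], ["record", "a", "clip"]])
```

In this toy corpus, "a" always follows "record", so P(a | record) = 1.0, while "video" follows "a" in only one of two sentences, so P(video | a) = 0.5.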
In addition, it should be understood that in the embodiments of this application, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following items" or a similar expression refers to any combination of these items, including a single item or any combination of multiple items. For example, at least one of a, b, or c may mean: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may itself be an element or a set containing one or more elements.
In this application, "exemplary", "in some embodiments", "in other embodiments", and the like are used to indicate an example, instance, or illustration. Any embodiment or design described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs; rather, the word "exemplary" is intended to present a concept in a concrete manner.
It should be noted that the terms "first," "second," and the like in the embodiments of the present application are used for descriptive purposes only and are not to be construed as indicating or implying any particular importance or order.
The electronic device in the embodiments of this application may be a portable terminal, such as a mobile phone, a tablet computer, a portable computer, or a wearable electronic device (such as a smart watch, smart glasses, or a smart helmet). Exemplary portable terminals include, but are not limited to, those running HarmonyOS or other operating systems (the operating-system logos shown in the original are omitted here). Furthermore, the electronic device in the embodiments of this application may also be other than a portable terminal, for example a desktop computer; this is not limited here.
For example, as shown in fig. 2, a schematic hardware structure of an electronic device according to an embodiment of the present application is shown. Specifically, as shown, the electronic device includes a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, a user identification module (subscriber identification module, SIM) card interface 195, and the like. The sensor module 180 may include, among others, a pressure sensor, a gyroscope sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc.
Processor 110 may include one or more processing units. For example: the processor 110 may include an application processor (application processor, AP), a modem (modem), a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-Network Processor (NPU), etc. Wherein the different processing units may be separate devices or two or more different processing units may be integrated in one device.
A memory may also be provided in the processor 110 for storing computer programs and/or data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold computer programs and/or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the computer program and/or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. For example, the processor 110 includes a universal serial bus (universal serial bus, USB) interface 130, a subscriber identity module (subscriber identity module, SIM) interface 195. For another example, the processor 110 may also include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), and/or a general-purpose input/output (GPIO) interface, among others.
It should be understood that the connection relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device. In other embodiments of the present application, the electronic device may also use different interfacing manners in the foregoing embodiments, or a combination of multiple interfacing manners.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge an electronic device, or may be used to transfer data between the electronic device and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as augmented reality (augmented reality, AR) devices, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195, or removed from the SIM card interface 195 to enable contact and separation with the electronic device. The electronic device may support 2 or N SIM card interfaces, N being a positive integer greater than 2. The SIM card interface 195 may support Nano SIM cards, micro SIM cards, and the like. The same SIM card interface 195 may be used to insert multiple cards simultaneously. The types of the plurality of cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic equipment interacts with the network through the SIM card, so that the functions of communication, data communication and the like are realized. In some embodiments, the electronic device employs esims, namely: an embedded SIM card. The eSIM card can be embedded in the electronic device and cannot be separated from the electronic device.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle times, battery health (leakage, impedance), and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide solutions for wireless communication applied to the electronic device, including standards such as 2G/3G/4G/5G and subsequent evolutions (for example, 6G). The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
The wireless communication module 160 may provide solutions for wireless communication applied to the electronic device, including wireless local area network (WLAN) (such as a Wi-Fi network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
In some embodiments, the antenna 1 and the mobile communication module 150 of the electronic device are coupled, and the antenna 2 and the wireless communication module 160 are coupled, so that the electronic device can communicate with the network and other devices through wireless communication technology. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS), and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device implements display functions through the GPU, the display screen 194, the application processor, and the like. The display screen 194 is used to display images, videos, and the like, and includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like. The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened and light is transmitted through the lens to the camera's photosensitive element, which converts the optical signal into an electrical signal and transmits it to the ISP, where it is processed into an image visible to the naked eye. The ISP can also optimize the noise, brightness, and skin color of the image, as well as parameters such as the exposure and color temperature of the photographed scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the electronic device may include 1 or N cameras 193, N being a positive integer greater than 1.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, audio, video, etc. files are stored in an external memory card.
The internal memory 121 includes a running memory and a built-in storage. The running memory may be used to store computer programs and/or data; the processor 110 executes various functional applications and data processing of the electronic device by running the computer programs stored in the running memory. For example, the running memory may comprise a high-speed random access memory. The built-in storage may also be used to store computer programs and/or data; for example, it may store an operating system, application programs, and the like. The electronic device typically loads the computer programs and/or data in the built-in storage into the running memory, so that the processor 110 runs them to implement the corresponding functions. Further, the internal memory 121 may include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The electronic device may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming-call vibration alerts as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects, and touch operations acting on different areas of the display screen 194 may also correspond to different vibration feedback effects. Different application scenarios (such as time reminders, received messages, alarm clocks, games, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, which may be used to indicate a charging state, a change in charge, a message, a missed call, a notification, and the like.
It should be understood that the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device. In other embodiments of the present application, the electronic device may include more or fewer components than illustrated, certain components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The following describes the video recording process in detail, in connection with scenes in which pictures, animations, audio special effects, and the like are inserted or added during video recording.
Scene one: inserting pictures or animations related to the video content during video recording.
Taking a video recording application that supports "record while edit" as an example, the electronic device displays interface 300 as shown in fig. 3. Interface 300 is the video recording interface of the video recording application and includes an image preview box 301, a video recording mode selection menu 302 (including regular, record-while-edit, etc.), and a virtual key 303. The image preview box 301 is used to preview the image collected by the camera. The virtual key 303 is used to control the start or stop of video recording. The video recording mode selection menu 302 is used to select a video recording mode; the embodiment of the present application takes the record-while-edit mode as an example. It will be appreciated that the video recording interface of the video recording application may also include other controls, such as a virtual button for switching between the front and rear cameras.
After the camera of the electronic device is aimed at the lecturer, the video recording application in the electronic device can, in response to the user's video recording operation, start the camera and the microphone, transmit the corresponding video recording parameters (such as video resolution, frame rate, etc.) to the video processing component, transmit the corresponding audio recording parameters (such as audio frame rate, etc.) to the audio processing component, and start video recording. The video processing component processes the collected images according to the video recording parameters and records the video; the microphone collects sound signals, and the audio processing component processes the collected sound signals according to the audio recording parameters and records the audio. A video file is obtained by synthesizing the recorded video and audio.
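The parameter hand-off described above can be sketched as follows. The component classes, parameter names, and values here are illustrative assumptions, not the application's actual interfaces.

```python
# Illustrative sketch: the application passes video parameters to the
# video processing component and audio parameters to the audio
# processing component when recording starts. All names and values
# below are assumptions for illustration only.

VIDEO_PARAMS = {"resolution": (1920, 1080), "frame_rate": 30}
AUDIO_PARAMS = {"sample_rate": 44100, "channels": 1}

class VideoProcessingComponent:
    def __init__(self, params):
        self.params = params          # e.g. resolution, frame rate
        self.frames = []
    def process(self, image):
        self.frames.append(image)     # record processed images as video

class AudioProcessingComponent:
    def __init__(self, params):
        self.params = params          # e.g. sample rate
        self.frames = []
    def process(self, sound):
        self.frames.append(sound)     # record processed sound as audio

def start_recording():
    """Start recording: hand each component its recording parameters."""
    return (VideoProcessingComponent(VIDEO_PARAMS),
            AudioProcessingComponent(AUDIO_PARAMS))

video_comp, audio_comp = start_recording()
video_comp.process("image-0")
audio_comp.process("sound-0")
```

Synthesizing the recorded video and audio into a file would then consume `video_comp.frames` and `audio_comp.frames`.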
During video recording, the video recording application can detect the recorded audio in real time, that is, detect the content of the lecturer's lecture in real time. If any first keyword of a first word stock is detected in the recorded audio, the video recording application acquires the picture or animation associated with that first keyword and replaces part of the video frames recorded after the keyword was detected with the associated picture or animation.
As shown in fig. 4, after the camera of the electronic device is aimed at the lecturer, the video recording application in the electronic device may start video recording in response to the user's recording operation. After recording starts, the recorded video first shows the lecturer's viewing angle. As the lecture proceeds, when the video recording application detects that first keyword 1 of the first word stock appears in the recorded audio, the electronic device can acquire the picture or animation associated with first keyword 1, replace part of the video frames recorded after the moment first keyword 1 was detected with that picture or animation, and display it. After displaying the picture or animation associated with first keyword 1, the electronic device switches back to the lecturer's viewing angle. As the lecturer continues, when the video recording application detects that first keyword 2 of the first word stock appears in the recorded audio, the electronic device acquires the picture or animation associated with first keyword 2, replaces part of the video frames recorded after the moment first keyword 2 was detected with that picture or animation, and switches back to the lecturer's viewing angle after displaying it; and so on, until the video recording ends.
It should be understood that the video recording application maintains a first word stock associated with pictures or animations, in which one or more first keywords associated with pictures or animations are stored, such as a countdown keyword, an applause keyword, and the like. The first keywords in the first word stock may be obtained by the video recording application from the application server corresponding to the video recording application, or may be configured by the user; the embodiment of the present application is not limited in this respect. Specifically, to detect in real time whether the recorded audio contains a first keyword of the first word stock, the video recording application may proceed as follows. As shown in fig. 5, taking the current recording time 00:01:10 as an example, the video recording application may extract from the recorded audio the audio frames of a set duration ending at the current recording time, for example the audio frames recorded at 00:01:07-00:01:10, extract acoustic features from the extracted audio frames, input the acoustic features into an acoustic model to obtain the corresponding phoneme sequence, and process the phoneme sequence through a language model to obtain the corresponding words. The video recording application then checks whether the obtained words belong to the first keywords of the first word stock; if so, it determines that a first keyword of the first word stock has been detected, and otherwise it does nothing.
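The sliding-window keyword check just described can be sketched as follows. The `transcribe` stub stands in for the whole acoustic-model plus language-model pipeline, and the word-stock contents are invented for illustration; treat this as a sketch of the control flow, not the application's code.

```python
# Sliding-window keyword spotting sketch. `transcribe` stands in for:
# acoustic features -> acoustic model -> phoneme sequence -> language
# model -> words. Here one "audio frame" per second is modelled by its
# transcript string, purely for illustration.

FIRST_WORD_STOCK = {"countdown", "applause"}   # hypothetical first keywords
WINDOW_SECONDS = 3                             # set duration ending at "now"

def transcribe(audio_frames):
    """Stub for the acoustic + language model: return the words spoken."""
    return [w for frame in audio_frames for w in frame.split()]

def detect_first_keyword(recorded_audio, current_time, frame_rate=1):
    """Extract the last WINDOW_SECONDS of audio and look for a keyword."""
    start = max(0, current_time - WINDOW_SECONDS * frame_rate)
    window = recorded_audio[start:current_time]
    for word in transcribe(window):
        if word in FIRST_WORD_STOCK:
            return word          # keyword detected -> trigger replacement
    return None                  # otherwise do nothing

audio = ["welcome everyone", "today we discuss", "let us have applause", "thanks"]
print(detect_first_keyword(audio, current_time=3))  # window covers frames 0..2
```

In the real pipeline the window would slide forward with each new recording instant, re-running the check on the most recent set duration of audio.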
As shown in fig. 6, when the video recording application detects that a first keyword of the first word stock appears in the recorded audio, it may obtain the picture or animation associated with the first keyword from the cloud, for example by obtaining it from the application server corresponding to the video recording application, or by searching a picture or animation search engine for the picture or animation that best matches the first keyword, and so on. After acquiring the picture or animation associated with the first keyword, the video recording application replaces part of the video frames recorded after the current recording time with it. If the first keyword is associated with an animation, the video recording application may replace the video frames recorded after the current recording time with the animation's video frames; if the first keyword is associated with a picture, the video recording application may replace a number of video frames (e.g., 10 frames) recorded after the current recording time with the picture.
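The frame replacement, together with saving the replaced frames for later restoration (as fig. 7 describes), can be sketched like this. Frames are plain strings here; a real implementation would operate on decoded image buffers.

```python
# Sketch of replacing video frames recorded after the detection moment
# with the frames of an associated animation (or N copies of a picture).
# The saved originals allow the "restore" edit described later.

def replace_frames(video_frames, detect_index, insert_frames):
    """Overwrite frames after `detect_index` with `insert_frames`.
    Returns the new timeline plus the original frames that were
    replaced, so the user can restore them during editing."""
    end = detect_index + len(insert_frames)
    replaced = video_frames[detect_index:end]        # saved for undo
    new_timeline = (video_frames[:detect_index]
                    + insert_frames
                    + video_frames[end:])
    return new_timeline, replaced

frames = [f"cam{i}" for i in range(6)]
animation = ["anim0", "anim1"]                       # animation's own frames
timeline, saved = replace_frames(frames, 2, animation)
print(timeline)  # ['cam0', 'cam1', 'anim0', 'anim1', 'cam4', 'cam5']
```

For a picture, `insert_frames` would simply be the same picture repeated, e.g. `["pic"] * 10` for a 10-frame replacement.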
In addition, for the picture or animation associated with the first keyword, the video recording application may also record its start position (start recording time in the video) and end position (end recording time in the video), and save the video frames that it replaced. After the video recording is completed, the user can adjust the start and/or end position of the picture or animation associated with the first keyword in the recorded video. As shown in fig. 7, after recording is completed, the video recording application may enter a video editing interface and display the recorded video. The user can determine the start and/or end position of the picture or animation by dragging its start position (Pic begin) and/or end position (Pic end), and can enter a replacement interface by double-clicking the picture or animation; the video recording application then displays the pictures or animations stored in the album of the electronic device for the user to select, and after the user selects one, replaces the corresponding picture or animation in the recorded video with the user's selection. The user can also cancel the picture or animation associated with the first keyword by long-pressing it, restoring in the recorded video the video frames that it replaced.
After the user finishes editing, if the video recording application detects no user operation beyond a set duration (e.g., after an operation detection timer times out), it may pop up an edit-completion confirmation interface. If the user confirms, the video recording application clears the markers for the start and end positions of the picture or animation, ensuring compatibility with other video applications; if the user cancels, it returns to the interface where the user can edit the recorded video and restarts timing the duration during which no user operation is detected.
As shown in fig. 8, the lecturer says "give everyone a few seconds to think". The video recording application detects the first keyword "give a few seconds", and a countdown animation associated with "give a few seconds" then appears in the video: for example, a "countdown start" picture is displayed for one second, a countdown interface follows, and after the countdown ends the video returns to the lecturer's viewing angle.
Scene two: inserting audio special effects, and the like, during video recording.
The video recording application may further maintain a second word stock associated with audio special effects, in which one or more second keywords associated with audio special effects are stored, such as the keywords "money", "strong wind", and the like. The second keywords in the second word stock may be obtained by the video recording application from the application server corresponding to the video recording application, or may be configured by the user; the embodiment of the present application is not limited in this respect. For the implementation of detecting in real time whether the recorded audio contains a second keyword of the second word stock, reference may be made to the implementation of detecting in real time whether the recorded audio contains a first keyword of the first word stock, which will not be described again.
As shown in fig. 9, when the video recording application detects that a second keyword of the second word stock appears in the recorded audio, it may obtain the audio special effect associated with the second keyword from the cloud, for example by obtaining it from the application server corresponding to the video recording application, or by searching an audio special effect search engine for the audio special effect that best matches the second keyword, and so on. After acquiring the audio special effect associated with the second keyword, the video recording application replaces part of the audio frames recorded after the current recording time with the effect's audio frames; specifically, it may replace the same number of audio frames after the current recording time as the effect contains. As an example, if the presenter says "spent xxx money", the video recording application detects the second keyword "money" and a coin-sound audio special effect is added.
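The audio-frame substitution in the "money" example can be sketched as follows; the frame contents are placeholder strings, and the indices stand in for recording times.

```python
# Sketch of inserting an audio special effect after a detected second
# keyword: the same number of audio frames as the effect, recorded
# after the current recording time, are replaced by the effect's frames.

def insert_audio_effect(audio_frames, current_index, effect_frames):
    """Replace len(effect_frames) frames after current_index with the effect."""
    end = current_index + len(effect_frames)
    return audio_frames[:current_index] + effect_frames + audio_frames[end:]

audio = ["spent", "some", "money", "today", "on", "books"]
# keyword "money" detected at index 2 -> effect frames begin at index 3
mixed = insert_audio_effect(audio, 3, ["coin-sfx-1", "coin-sfx-2"])
print(mixed)  # ['spent', 'some', 'money', 'coin-sfx-1', 'coin-sfx-2', 'books']
```

A production implementation would mix or crossfade PCM samples rather than overwrite whole frames, but the placement logic is the same.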
In addition, considering that sensitive second keywords, such as curse words that need to be masked with a beep, may appear in the recorded audio, the maintained second word stock associated with audio special effects may also save a type for each stored second keyword, for example: a second keyword of type A is a non-sensitive keyword, and a second keyword of type B is a sensitive keyword. For a second keyword of type A, when the video recording application detects that it appears in the recorded audio, part of the audio frames recorded after the keyword was detected is replaced with the associated audio special effect; for a second keyword of type B, when the video recording application detects that it appears in the recorded audio, the part of the audio frames containing the keyword itself is replaced with the associated audio special effect. The audio special effect associated with a type-B second keyword may be a silent effect (i.e., a segment of silence) or another masking sound effect, so as to mask the sensitive second keyword.
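The type-A versus type-B distinction can be sketched as follows: a non-sensitive keyword gets its effect added after the keyword, while a sensitive keyword has the frames containing it masked. The word-stock contents and frame representation are invented for illustration.

```python
# Sketch of type-A (non-sensitive) vs type-B (sensitive) second
# keywords. Type A: effect follows the keyword. Type B: the frames
# containing the keyword itself are replaced with a masking effect.

SECOND_WORD_STOCK = {
    "money": ("A", ["coin-sfx"]),      # non-sensitive keyword
    "curse": ("B", ["silence"]),       # sensitive keyword -> mask it
}

def apply_second_keyword_effect(audio_frames, word, start, end):
    """`start`/`end` delimit the frames containing the detected keyword."""
    kind, effect = SECOND_WORD_STOCK[word]
    if kind == "A":
        # replace frames recorded after the keyword with the effect
        return audio_frames[:end] + effect + audio_frames[end + len(effect):]
    # type B: replace the frames containing the keyword with the mask
    return audio_frames[:start] + effect + audio_frames[end:]

audio = ["well", "that", "curse", "was", "rude"]
print(apply_second_keyword_effect(audio, "curse", 2, 3))
# ['well', 'that', 'silence', 'was', 'rude']
```

Note that for type B the keyword's own audio never reaches the output, which is the point of the masking behavior.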
Scene three: adding video special effects during video recording.
A gesture library associated with video special effects (e.g., keyword displays, special effect words, watermarks, filters, etc.) may also be maintained in the video recording application. The key gestures (e.g., key gesture pictures) in the gesture library may be acquired by the video recording application from the application server corresponding to the video recording application. The video recording application detects the recorded video in real time to determine whether it contains a key gesture of the gesture library, and may proceed as follows. As shown in fig. 10, taking the current recording time as an example, the video recording application may extract the video frame at the current recording time and identify whether an image area in that frame matches a key gesture (key gesture picture) of the gesture library. If a matching image area exists, the application determines that the key gesture is present in the frame, caches that frame into the video buffer queue, and continues caching subsequent recorded frames containing the key gesture into the video buffer queue until the recorded frames no longer contain the key gesture.
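The detect-and-buffer loop can be sketched as follows. The image-matching step is stubbed out with a substring test, and the gesture names are invented; a real implementation would match image regions against the key gesture pictures.

```python
# Sketch of caching frames into the video buffer queue: starting from
# the first frame matching a key gesture, frames are buffered until
# the gesture disappears. Matching is a stub for illustration only.

GESTURE_LIBRARY = ["thumbs-up", "hands-spread"]   # hypothetical key gestures

def matches_gesture(frame, gesture):
    """Stub for matching an image area against a key-gesture picture."""
    return gesture in frame

def buffer_gesture_frames(video_stream, gesture):
    buffer_queue, detecting = [], False
    for frame in video_stream:
        if matches_gesture(frame, gesture):
            detecting = True
            buffer_queue.append(frame)       # cache frame with the gesture
        elif detecting:
            break                            # gesture gone -> stop caching
    return buffer_queue

stream = ["f0", "f1 thumbs-up", "f2 thumbs-up", "f3", "f4"]
print(buffer_gesture_frames(stream, "thumbs-up"))
# ['f1 thumbs-up', 'f2 thumbs-up']
```

The buffered frames are exactly the ones the video special effect is later applied to.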
In addition, after detecting that a key gesture is present in the video frame at the current recording time, the video recording application pops up a preview interface in which the video special effect corresponding to the key gesture has been added to the video frame, so that the user can adjust the effect. For example, for video special effects such as keyword displays, special effect words, and watermarks, the user can drag the effect to any position on the preview, select it by double-clicking, cancel it by long-pressing, and so on. After the user selects the video special effect, the video recording application adds the same effect, according to the effect added to that frame, to the frames containing the key gesture that are cached in the video buffer queue after it, and performs texture mixing on the frames with the effect added, for example using the open graphics library (OpenGL), to generate video frames with new textures, thereby adding the video special effect to the video.
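As a toy stand-in for the texture-mixing step, the following sketch alpha-blends an effect overlay onto each buffered frame. Real texture mixing runs on GPU textures via OpenGL; grayscale pixel lists are used here purely to show the per-pixel blend that mixing performs.

```python
# Toy stand-in for OpenGL texture mixing: per-pixel alpha blending of
# an effect overlay onto every frame cached in the buffer queue.

def blend_pixels(frame, overlay, alpha=0.5):
    """Blend overlay onto frame: out = (1 - alpha)*frame + alpha*overlay."""
    return [round((1 - alpha) * f + alpha * o) for f, o in zip(frame, overlay)]

def add_effect_to_buffer(buffer_queue, overlay, alpha=0.5):
    """Apply the same effect overlay to every buffered frame."""
    return [blend_pixels(frame, overlay, alpha) for frame in buffer_queue]

frames = [[100, 100, 100], [200, 200, 200]]   # two 3-pixel grayscale "frames"
overlay = [0, 50, 250]                        # the "special effect" texture
print(add_effect_to_buffer(frames, overlay))
# [[50, 75, 175], [100, 125, 225]]
```

In OpenGL the same operation would be expressed with blend functions over fragment colors rather than Python loops.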
As shown in fig. 11, when the video recording application detects in real time the key gesture of spreading both hands in the recorded video, it detects the audio frames recorded within a set duration after the current recording time and checks whether they contain a keyword, for example a third keyword of a third word stock, or a first and/or second keyword of the first and/or second word stock, and so on. If a keyword is contained, the video recording application pops up a preview interface in which the keyword "good and share" has been added to the video frame where the gesture was detected, and at the same time buffers that video frame and the subsequent video frames containing the gesture into the video buffer queue. The user can drag "good and share" to change its position on the preview interface, and double-click "good and share" to select it. According to the position of "good and share" in the video frame, the video recording application adds the keyword "good and share" to the frames containing the gesture that are cached in the video buffer queue after that frame, performs texture mixing on the frames with the keyword added, and generates video frames with new textures, thereby adding the keyword "good and share" to the video.
As shown in the preview interface in fig. 12, after the video recording application detects the thumbs-up key gesture in the recorded video in real time, it pops up a preview interface in which the special effect word "bar-bar" has been added to the video frame where the gesture was detected. The user can drag "bar-bar" to change its position on the preview interface, and double-click "bar-bar" to select the special effect word.
In another possible implementation, as shown in fig. 13, after detecting that a key gesture is present in the video frame at the current recording time, the video recording application records the timestamp corresponding to that frame and pops up a preview interface in which the video special effect corresponding to the key gesture has been added, so that the user can adjust the effect. After the user selects the effect, the video recording application, starting from the frame identified by the recorded timestamp, reads and detects the subsequent frames containing the key gesture until the frames no longer contain it. According to the effect added to the first frame, it then adds the same effect to each subsequent frame containing the key gesture and performs texture mixing on the frames with the effect added, for example using OpenGL, to generate video frames with new textures, thereby adding the video special effect to the video.
In still another possible implementation, as shown in fig. 14, after detecting that the recorded video contains a key gesture of the gesture library, the video recording application may record the timestamp of the video frame in which the key gesture was detected and add the corresponding key gesture to a video special effect selection bitmap. After recording is completed, the user can select through this bitmap the timestamp of the video frame to be edited and enter the preview interface for adding a video special effect. After the user selects the effect on the preview interface, the video recording application adds the effect, according to the effect added to that frame, to the frames containing the key gesture after it, and performs texture mixing on the frames with the effect added to generate video frames with new textures, thereby adding the video special effect to the video.
It should be appreciated that if the video file does not exist in the form of frame data (video frames and audio frames), the video recording application may decode some or all of the data in the video file into frame data through a codec (e.g., a codec of the MediaCodec interface), and then, after processing the frame data (e.g., replacing video frames with pictures), encode the frame data back through the codec.
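The decode, process, re-encode round trip can be sketched as follows. The toy "codec" functions stand in for a MediaCodec-style interface, and the transform stands in for any frame-level processing; none of this reflects the real MediaCodec API surface.

```python
# Sketch of the decode -> process -> re-encode round trip. The split/
# join pair is a toy codec for illustration; real codecs operate on
# compressed bitstreams via hardware or software encoders.

def decode(container):
    """Toy decoder: split packed file data into per-frame data."""
    return container.split(b"|")

def encode(frames):
    """Toy encoder: pack frame data back into file form."""
    return b"|".join(frames)

def process_video_file(container, transform):
    frames = decode(container)                 # file -> frame data
    frames = [transform(f) for f in frames]    # e.g. replace frames with pictures
    return encode(frames)                      # frame data -> file

out = process_video_file(b"f1|f2|f3", lambda f: f.upper())
print(out)  # b'F1|F2|F3'
```

The key point is simply that frame-level edits require frame data, so file-form video must pass through the codec in both directions.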
In addition, in response to a user's fast-recording operation, the video recording application can discard video frames and audio frames of the recorded video and audio at intervals according to the fast-recording multiple; and in response to a user's slow-recording operation, it can adjust the switching rate of the video frames and audio frames of the recorded video and audio according to the slow-recording multiple.
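The fast- and slow-recording behavior can be sketched as follows. Modelling slow recording by repeating frames is an assumption made for illustration; the text only says the switching rate is adjusted.

```python
# Sketch of fast and slow recording. Fast recording discards frames at
# intervals according to the multiple; slow recording lowers the
# effective switching rate, modelled here by repeating each frame.

def fast_record(frames, multiple):
    """Keep every `multiple`-th frame, discarding the rest."""
    return frames[::multiple]

def slow_record(frames, multiple):
    """Repeat each frame `multiple` times to slow apparent playback."""
    return [f for f in frames for _ in range(multiple)]

frames = ["f0", "f1", "f2", "f3"]
print(fast_record(frames, 2))  # ['f0', 'f2']
print(slow_record(frames, 2))  # ['f0', 'f0', 'f1', 'f1', 'f2', 'f2', 'f3', 'f3']
```

The same interval logic applies to audio frames, so video and audio stay in sync after the speed change.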
It should be noted that, in the embodiment of the present application, the end of video recording may be triggered by a user operation, or recording may end automatically a preset duration after the electronic device starts it; this is not limited. The implementations of inserting or adding pictures, animations, and audio special effects in the video recording process of the respective scenes may be used alone or in combination with each other to achieve different technical effects, without limitation. In addition, the video recording scenes involved in the embodiment of the present application are not limited to scenes in which the electronic device records an external target through the video recording application, such as recording a lecturer; they also include scenes in which the electronic device records its own screen through the video recording application, for example recording the screen of the electronic device and the call content while the user is on a video call.
It should be understood that the video recording method in the embodiment of the present application is described by taking a video recording application as an example, but the embodiment is not limited to a video recording application; the method may also be applied to a short video application and the like, without limitation.
In the embodiments provided in the present application, the method provided in the embodiments of the present application is described from the perspective of the electronic device as the execution subject. In order to implement the functions of the methods provided in the embodiments of the present application, the electronic device may include a hardware structure and/or a software module, implementing the functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether a given function is performed by a hardware structure, a software module, or a combination of the two depends on the specific application and design constraints of the technical solution.
Embodiments of the present application also provide an electronic device, as shown in fig. 15, comprising one or more processors 1501, one or more memories 1502. The memory 1502 stores one or more computer programs that, when executed by the processor 1501, cause the electronic device to perform the video recording method provided by the embodiments of the present application.
Further, in some embodiments, the electronic device may also include a camera 1503 and a microphone 1504.
In other embodiments, the electronic device may also include a display screen 1505 for displaying a graphical user interface, such as an interface of a media application. By way of example, the display screen 1505 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
In addition, the electronic device in the embodiment of the present application may further include a speaker, a touch sensor, and the like, which is not limited.
The connection medium between the processor 1501, the memory 1502, the camera 1503, the microphone 1504, and the display screen 1505 is not limited in the embodiment of the present application. For example, the processor 1501, the memory 1502, the camera 1503, the microphone 1504, and the display screen 1505 may be connected through buses in the embodiment of the present application, and the buses may be divided into address buses, data buses, control buses, and the like.
In the embodiments of the present application, the processor may be a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution.
In the embodiments of the present application, the memory may be a nonvolatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or may be a volatile memory, such as a random-access memory (RAM). The memory may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory in the embodiments of the present application may also be a circuit or any other device capable of implementing a storage function, for storing program instructions and/or data.
As used in the above embodiments, the term "when" or "after" may be interpreted, depending on the context, to mean "if", "after", "in response to determining", or "in response to detecting". Similarly, the phrase "upon determining" or "if (a stated condition or event) is detected" may be interpreted, depending on the context, to mean "if it is determined", "in response to determining", "upon detecting (the stated condition or event)", or "in response to detecting (the stated condition or event)".
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid-state disk (SSD)), among others. The schemes of the above embodiments may be used in combination where they do not conflict.
It is noted that a portion of this patent document contains material which is subject to copyright protection. The copyright owner reserves all copyright rights, except for the reproduction of the patent document or of recorded patent document content by the patent office.

Claims (9)

1. A video recording method, comprising:
an electronic device detecting a first operation for starting video recording;
the electronic device starting video recording in response to the first operation;
when a first keyword in a first word library is detected in recorded audio, the electronic device replacing a first number of video frames recorded after the first keyword is detected with a picture or animation associated with the first keyword; and
when a second keyword in a second word library is detected in the recorded audio, the electronic device replacing a second number of audio frames recorded after the second keyword is detected with an audio special effect associated with the second keyword, or replacing a third number of audio frames containing the second keyword with the audio special effect associated with the second keyword.
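The keyword-triggered replacement of claim 1 can be sketched in a few lines. This is a minimal, hypothetical illustration only: the lexicons, frame counts, and asset identifiers below are assumptions for demonstration, not the patented implementation, and speech recognition is assumed to have already produced a list of (frame index, word) pairs.

```python
# Hypothetical sketch of claim 1: keyword-triggered frame/audio replacement.
# Lexicons, counts, and asset names are illustrative assumptions.

FIRST_LEXICON = {"birthday": "cake_animation"}   # first word library: keyword -> picture/animation
SECOND_LEXICON = {"boom": "explosion_sfx"}       # second word library: keyword -> audio effect
FIRST_NUM = 30    # "first number" of video frames to replace after a keyword
SECOND_NUM = 10   # "second number" of audio frames to replace after a keyword

def apply_keyword_effects(video_frames, audio_frames, transcript):
    """transcript: list of (frame_index, word) pairs from speech recognition."""
    video = list(video_frames)
    audio = list(audio_frames)
    for idx, word in transcript:
        if word in FIRST_LEXICON:
            asset = FIRST_LEXICON[word]
            # Replace the next FIRST_NUM video frames with the associated animation.
            for i in range(idx, min(idx + FIRST_NUM, len(video))):
                video[i] = asset
        if word in SECOND_LEXICON:
            sfx = SECOND_LEXICON[word]
            # Replace the next SECOND_NUM audio frames with the associated effect.
            for i in range(idx, min(idx + SECOND_NUM, len(audio))):
                audio[i] = sfx
    return video, audio
```

In practice the replacement would operate on decoded frame buffers in the recording pipeline; lists of frame indices are used here only to make the windowed substitution visible.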
2. The method of claim 1, wherein replacing the first number of video frames with the picture or animation associated with the first keyword comprises:
the electronic device detecting a second operation for replacing the picture or animation associated with the first keyword with a picture or animation provided by a user; and
the electronic device, in response to the second operation, replacing the picture or animation associated with the first keyword with the picture or animation provided by the user.
3. The method of claim 1 or 2, wherein the method further comprises:
the electronic device detecting a third operation for adjusting a start position or an end position of the picture or animation in the recorded video; and
the electronic device, in response to the third operation, adjusting the start position or the end position of the picture or animation in the recorded video.
4. The method according to any one of claims 1-3, wherein the method further comprises:
when a key gesture in a gesture library is detected in the recorded video, the electronic device adding a video special effect associated with the key gesture to a fourth number of video frames that start with the video frame in which the key gesture is detected and that contain the key gesture.
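The gesture branch of claim 4 overlays, rather than replaces, an effect on a window of frames. The sketch below assumes a detector has already reported the starting indices of frames containing a key gesture; the window length and effect name are illustrative, not part of the patent.

```python
# Hypothetical sketch of claim 4: overlay a video special effect on the
# "fourth number" of frames starting at each detected key-gesture frame.

def add_gesture_effect(frames, gesture_hits, effect, fourth_num=24):
    """gesture_hits: frame indices where a key gesture was detected."""
    out = list(frames)
    for start in gesture_hits:
        for i in range(start, min(start + fourth_num, len(out))):
            # Pair the original frame with the effect to model an overlay,
            # in contrast to the outright replacement of claim 1.
            out[i] = (out[i], effect)
    return out
```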
5. The method of any one of claims 1-4, wherein the method further comprises:
the electronic device detecting a fourth operation for starting fast recording; and
the electronic device, in response to the fourth operation, discarding video frames and audio frames from the recorded video and audio at intervals according to a fast-recording multiple.
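Discarding frames at intervals according to a fast-recording multiple amounts to keeping every N-th frame. A one-line sketch under that assumption (the function name and integer-multiple interface are illustrative):

```python
# Hypothetical sketch of claim 5: fast recording by periodic frame discard.

def fast_record(frames, multiple):
    """Keep every `multiple`-th frame, discarding the rest."""
    return frames[::multiple]
```

With a multiple of 2, half the frames are dropped and the recorded clip plays back in half the time.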
6. The method of any one of claims 1-4, wherein the method further comprises:
the electronic device detecting a fifth operation for starting slow recording; and
the electronic device, in response to the fifth operation, adjusting the switching rate of video frames and audio frames in the recorded video and audio according to a slow-recording multiple.
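One way to realize the slowed switching rate of claim 6 is to repeat each frame according to the slow-recording multiple, so that playback advances at 1/multiple of the original rate. This is an assumed realization for illustration; the claim itself only requires adjusting the rate at which frames are switched.

```python
# Hypothetical sketch of claim 6: slow recording by frame repetition.

def slow_record(frames, multiple):
    """Repeat each frame `multiple` times so playback runs 1/multiple as fast."""
    return [f for f in frames for _ in range(multiple)]
```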
7. An electronic device comprising a processor and a memory;
the memory stores a computer program;
the computer program, when executed, causes the electronic device to perform the method of any of claims 1-6.
8. A computer readable storage medium, characterized in that the computer readable storage medium comprises a computer program which, when run on an electronic device, causes the electronic device to perform the method according to any of claims 1-6.
9. A chip, characterized in that the chip is coupled to a memory in an electronic device, wherein the chip, when running, invokes a computer program stored in the memory to implement the method according to any one of claims 1-6.
CN202110221755.4A 2020-10-22 2021-02-27 Video recording method, electronic equipment, storage medium and chip Active CN114390341B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020111424690 2020-10-22
CN202011142469 2020-10-22

Publications (2)

Publication Number Publication Date
CN114390341A CN114390341A (en) 2022-04-22
CN114390341B true CN114390341B (en) 2023-06-06

Family

ID=81194999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110221755.4A Active CN114390341B (en) 2020-10-22 2021-02-27 Video recording method, electronic equipment, storage medium and chip

Country Status (1)

Country Link
CN (1) CN114390341B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230410396A1 (en) * 2022-06-17 2023-12-21 Lemon Inc. Audio or visual input interacting with video creation

Citations (4)

Publication number Priority date Publication date Assignee Title
JP2004128692A (en) * 2002-09-30 2004-04-22 Ntt Comware Corp Method of imparting searching index for moving image, and apparatus and program thereof
CN107203279A (en) * 2017-05-24 2017-09-26 北京小米移动软件有限公司 Keyword reminding method and equipment
CN109547847A (en) * 2018-11-22 2019-03-29 广州酷狗计算机科技有限公司 Add the method, apparatus and computer readable storage medium of video information
CN110602386A (en) * 2019-08-28 2019-12-20 维沃移动通信有限公司 Video recording method and electronic equipment

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US9697235B2 (en) * 2014-07-16 2017-07-04 Verizon Patent And Licensing Inc. On device image keyword identification and content overlay
CN105451029B (en) * 2015-12-02 2019-04-02 广州华多网络科技有限公司 A kind of processing method and processing device of video image
US10880614B2 (en) * 2017-10-20 2020-12-29 Fmr Llc Integrated intelligent overlay for media content streams
CN108052927B (en) * 2017-12-29 2021-06-01 北京奇虎科技有限公司 Gesture processing method and device based on video data and computing equipment
CN109492577B (en) * 2018-11-08 2020-09-18 北京奇艺世纪科技有限公司 Gesture recognition method and device and electronic equipment
CN110417991B (en) * 2019-06-18 2021-01-29 华为技术有限公司 Screen recording method and electronic equipment
CN110855921B (en) * 2019-11-12 2021-12-03 维沃移动通信有限公司 Video recording control method and electronic equipment

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
JP2004128692A (en) * 2002-09-30 2004-04-22 Ntt Comware Corp Method of imparting searching index for moving image, and apparatus and program thereof
CN107203279A (en) * 2017-05-24 2017-09-26 北京小米移动软件有限公司 Keyword reminding method and equipment
CN109547847A (en) * 2018-11-22 2019-03-29 广州酷狗计算机科技有限公司 Add the method, apparatus and computer readable storage medium of video information
CN110602386A (en) * 2019-08-28 2019-12-20 维沃移动通信有限公司 Video recording method and electronic equipment

Non-Patent Citations (1)

Title
Li Xiaopeng. "A Brief Discussion on Video Production and Editing in Digital Course Publishing." Publishing & Printing, 2017(04), full text. *

Also Published As

Publication number Publication date
CN114390341A (en) 2022-04-22

Similar Documents

Publication Publication Date Title
CN113794800B (en) Voice control method and electronic equipment
WO2021213120A1 (en) Screen projection method and apparatus, and electronic device
WO2020078299A1 (en) Method for processing video file, and electronic device
CN112714214B (en) Content connection method, equipment, system, GUI and computer readable storage medium
CN110138959B (en) Method for displaying prompt of human-computer interaction instruction and electronic equipment
WO2021104485A1 (en) Photographing method and electronic device
CN110933330A (en) Video dubbing method and device, computer equipment and computer-readable storage medium
WO2020119455A1 (en) Method for repeating word or sentence during video playback, and electronic device
CN113194242B (en) Shooting method in long-focus scene and mobile terminal
CN111465918B (en) Method for displaying service information in preview interface and electronic equipment
CN115191110A (en) Video shooting method and electronic equipment
CN111564152B (en) Voice conversion method and device, electronic equipment and storage medium
CN111669459A (en) Keyboard display method, electronic device and computer readable storage medium
CN112116904B (en) Voice conversion method, device, equipment and storage medium
CN112214636A (en) Audio file recommendation method and device, electronic equipment and readable storage medium
CN111881315A (en) Image information input method, electronic device, and computer-readable storage medium
CN114255745A (en) Man-machine interaction method, electronic equipment and system
CN111970401A (en) Call content processing method and electronic equipment
CN112015943A (en) Humming recognition method and related equipment
CN112632445A (en) Webpage playing method, device, equipment and storage medium
CN112383664A (en) Equipment control method, first terminal equipment and second terminal equipment
CN114363527A (en) Video generation method and electronic equipment
CN114697732A (en) Shooting method, system and electronic equipment
CN114390341B (en) Video recording method, electronic equipment, storage medium and chip
CN112416984A (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant