CN108260009B - Video processing method, mobile terminal and computer readable storage medium - Google Patents

Video processing method, mobile terminal and computer readable storage medium

Info

Publication number
CN108260009B
CN108260009B (application CN201810075139.0A)
Authority
CN
China
Prior art keywords
video
mobile terminal
identification information
pressing
tag
Prior art date
Legal status
Active
Application number
CN201810075139.0A
Other languages
Chinese (zh)
Other versions
CN108260009A (en)
Inventor
韩延罡
Current Assignee
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nubia Technology Co Ltd
Priority to CN201810075139.0A
Publication of CN108260009A
Application granted
Publication of CN108260009B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0414: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using force sensing means to determine a position
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845: Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455: Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/76: Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Telephone Function (AREA)

Abstract

The invention discloses a video processing method comprising the following steps: when a mobile terminal is currently performing a video recording operation and receives an adding instruction of a video tag, obtaining a tag time and first identification information corresponding to the adding instruction; generating a first video tag based on the first identification information; and adding the first video tag to the first video information obtained by the video recording operation based on the tag time. The invention also discloses a mobile terminal and a computer readable storage medium. Because the first video tag is added to the first video information obtained by the video recording operation according to the adding instruction, the content of the first video information can be located through the first video tag when the first video information is played, which simplifies the process of searching the first video information for the video content corresponding to the first video tag and improves searching efficiency.

Description

Video processing method, mobile terminal and computer readable storage medium
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a video processing method, a mobile terminal, and a computer-readable storage medium.
Background
At present, with the rapid development of mobile terminal technologies, mobile terminals such as mobile phones and tablet computers are used ever more widely, and users have become accustomed to watching and recording videos on them in daily life.
Generally, when a user wants to watch a particular segment of a recorded video, the user mainly searches from the beginning of the video by manually fast-forwarding step by step; this search is inefficient and its cost grows with the size of the video file. For example, if a user records a segment containing important information during a recording session, then after the recording is completed the user can only locate that segment by manually fast-forwarding through the recorded video information, which makes the search process very cumbersome.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a video processing method, a mobile terminal and a computer readable storage medium, so as to solve the technical problem in the prior art that searching recorded video information for a particular segment of video content is a cumbersome and inefficient process.
In order to achieve the above object, the present invention provides a video processing method, including the steps of:
when a mobile terminal is currently performing a video recording operation and receives an adding instruction of a video tag, obtaining a tag time and first identification information corresponding to the adding instruction;
generating a first video tag based on the first identification information;
and adding the first video tag to the first video information obtained by the video recording operation based on the tag time.
Further, in an embodiment, the video processing method further includes:
when the mobile terminal is currently performing a video playing operation and receives a playing instruction of a video tag, acquiring second identification information corresponding to the playing instruction;
determining whether a second video tag exists in second video information corresponding to the video playing operation;
and when the second video tag exists in the second video information, playing the second video information based on the tag time corresponding to the second video tag.
Further, in an embodiment, a frame of the mobile terminal is provided with a plurality of pressure sensors, and the step of acquiring a tag time and first identification information corresponding to an adding instruction when the mobile terminal currently performs a video recording operation and receives the adding instruction of a video tag includes:
when a mobile terminal carries out video recording operation currently and detects a first pressing operation triggered based on a pressure sensor, acquiring a first pressing parameter of the first pressing operation, wherein the first pressing parameter comprises a first pressure value and a first pressing duration;
and when the first pressure value is greater than a preset pressure threshold and the first pressing duration is greater than or equal to a preset duration, setting the current time as the tag time, and setting the identification information of the pressure sensor triggering the first pressing operation as the first identification information.
Further, in an embodiment, the step of acquiring, when the mobile terminal performs a video playing operation currently and receives a playing instruction of a video tag, second identification information corresponding to the playing instruction includes:
when the mobile terminal is currently performing a video playing operation and detects a second pressing operation triggered based on the pressure sensor, second pressing parameters of the second pressing operation are obtained, wherein the second pressing parameters comprise a second pressure value and a second pressing duration;
and when the second pressure value is greater than the preset pressure threshold value and the second pressing time length is greater than or equal to the preset time length, setting the identification information of the pressure sensor triggering the second pressing operation as the second identification information.
Further, in an embodiment, the step of acquiring a tag time and first identification information corresponding to an adding instruction when the mobile terminal currently performs a video recording operation and receives the adding instruction of a video tag includes:
when a mobile terminal carries out video recording operation currently and detects that a fingerprint detection module of the mobile terminal receives first fingerprint information, acquiring input time corresponding to the first fingerprint information;
and setting the input time as the tag time, and setting the first fingerprint information as the first identification information.
Further, in an embodiment, the step of acquiring, when the mobile terminal performs a video playing operation currently and receives a playing instruction of a video tag, second identification information corresponding to the playing instruction includes:
and when the mobile terminal carries out video playing operation currently and detects that the fingerprint detection module receives second fingerprint information, setting the second fingerprint information as the second identification information.
Further, in an embodiment, the step of acquiring a tag time and first identification information corresponding to an adding instruction when the mobile terminal currently performs a video recording operation and receives the adding instruction of a video tag includes:
when a mobile terminal is currently performing a video recording operation and detects a third pressing operation triggered based on a pressure sensor of the mobile terminal, acquiring the number of presses within a preset time interval corresponding to the third pressing operation;
and when the number of presses within the preset time interval meets a preset condition, setting the current time as the tag time, and setting the number of presses as the first identification information.
Further, in an embodiment, the step of generating a first video tag based on the first identification information includes:
determining whether a video tag corresponding to the first identification information exists in the first video information;
and when the video tag corresponding to the first identification information does not exist in the first video information, generating a first video tag based on the first identification information.
In addition, to achieve the above object, the present invention also provides a mobile terminal, including: a memory, a processor and a video processing program stored on the memory and executable on the processor, the video processing program when executed by the processor implementing the steps of the video processing method of any of the above.
In addition, to achieve the above object, the present invention further provides a computer-readable storage medium having a video processing program stored thereon, the video processing program, when executed by a processor, implementing the steps of the video processing method according to any one of the above.
The invention acquires the tag time and the first identification information corresponding to the adding instruction when the mobile terminal is currently performing a video recording operation and receives an adding instruction of a video tag, generates a first video tag based on the first identification information, and then adds the first video tag to the first video information obtained by the video recording operation based on the tag time. In this way the first video tag is added to the first video information according to the adding instruction, so that when the first video information is played, its content can be located through the first video tag, and playback can even be fast-forwarded directly to the position corresponding to the first video tag. This simplifies the process of searching the first video information for the video content corresponding to the first video tag, improves searching efficiency, and improves the user experience.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of a terminal for implementing various embodiments of the present invention;
fig. 2 is a diagram of a communication network system architecture according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a video processing method according to a first embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a video processing method according to a second embodiment of the present invention;
fig. 6 is a schematic diagram illustrating a display of video information played by a mobile terminal according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a mobile terminal according to another embodiment of the present invention;
fig. 8 is a schematic detailed flow chart of a step of acquiring a tag time and first identification information corresponding to an adding instruction when a mobile terminal currently performs a video recording operation and receives the adding instruction of a video tag in a third embodiment of the video processing method according to the present invention;
fig. 9 is a detailed flowchart of a step of acquiring second identification information corresponding to a video playing instruction when the mobile terminal currently performs a video playing operation and receives the playing instruction of a video tag in the fourth embodiment of the video processing method according to the present invention;
fig. 10 is a detailed flowchart of a step of acquiring a tag time and first identification information corresponding to an adding instruction when a video recording operation is currently performed by a mobile terminal and the adding instruction of a video tag is received in a fifth embodiment of the video processing method according to the present invention;
fig. 11 is a schematic detailed flow chart of a step of acquiring a tag time and first identification information corresponding to an adding instruction when a mobile terminal currently performs a video recording operation and receives the adding instruction of a video tag in a sixth embodiment of the video processing method according to the present invention;
fig. 12 is a schematic structural diagram of a mobile terminal according to another embodiment of the present invention;
fig. 13 is a flowchart illustrating a detailed process of the step of generating the first video tag based on the first identification information in the seventh embodiment of the video processing method according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like.
The following description will be given by way of example of a mobile terminal, and it will be understood by those skilled in the art that the construction according to the embodiment of the present invention can be applied to a fixed type terminal, in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, Wi-Fi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal 100 depicted in fig. 1 is not intended to be limiting of the mobile terminal 100, and that the mobile terminal 100 may include more or less components than those shown, or some components may be combined, or a different arrangement of components.
The various components of the mobile terminal 100 are described in detail below with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call; specifically, it receives downlink information of a base station and forwards it to the processor 110 for processing, and it transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplex Long Term Evolution), TDD-LTE (Time Division Duplex Long Term Evolution), and the like.
Wi-Fi belongs to a short-distance wireless transmission technology, and the mobile terminal 100 can help a user to send and receive e-mails, browse webpages, access streaming media and the like through the Wi-Fi module 102, and provides wireless broadband internet access for the user. Although fig. 1 shows the Wi-Fi module 102, it is understood that it does not belong to the essential constitution of the mobile terminal 100, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the Wi-Fi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the Wi-Fi module 102. The microphone 1042 may receive sound (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 for output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited to these specific examples.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal 100, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal 100 and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby monitoring the mobile terminal 100 as a whole. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
Further, in the mobile terminal shown in fig. 1, the processor 110 is configured to call the video processing program stored in the memory 109, and perform the following operations:
when a mobile terminal is currently performing a video recording operation and receives an adding instruction of a video tag, obtaining a tag time and first identification information corresponding to the adding instruction;
generating a first video tag based on the first identification information;
and adding the first video tag to the first video information obtained by the video recording operation based on the tag time.
Further, the processor 110 may call the video processing program stored in the memory 109, and further perform the following operations:
when the mobile terminal is currently performing a video playing operation and receives a playing instruction of a video tag, acquiring second identification information corresponding to the playing instruction;
determining whether a second video tag exists in second video information corresponding to the video playing operation;
and when the second video tag exists in the second video information, playing the second video information based on the tag time corresponding to the second video tag.
Further, the processor 110 may call the video processing program stored in the memory 109, and further perform the following operations:
when a mobile terminal carries out video recording operation currently and detects a first pressing operation triggered based on a pressure sensor, acquiring a first pressing parameter of the first pressing operation, wherein the first pressing parameter comprises a first pressure value and a first pressing duration;
and when the first pressure value is greater than a preset pressure threshold and the first pressing duration is greater than or equal to a preset duration, setting the current time as the tag time, and setting the identification information of the pressure sensor triggering the first pressing operation as the first identification information.
Further, the processor 110 may call the video processing program stored in the memory 109, and further perform the following operations:
when the mobile terminal is currently performing a video playing operation and detects a second pressing operation triggered based on the pressure sensor, second pressing parameters of the second pressing operation are obtained, wherein the second pressing parameters comprise a second pressure value and a second pressing duration;
and when the second pressure value is greater than the preset pressure threshold value and the second pressing time length is greater than or equal to the preset time length, setting the identification information of the pressure sensor triggering the second pressing operation as the second identification information.
Further, the processor 110 may call the video processing program stored in the memory 109, and further perform the following operations:
when a mobile terminal carries out video recording operation currently and detects that a fingerprint detection module of the mobile terminal receives first fingerprint information, acquiring input time corresponding to the first fingerprint information;
and setting the input time as the tag time, and setting the first fingerprint information as the first identification information.
Further, the processor 110 may call the video processing program stored in the memory 109, and further perform the following operations:
and when the mobile terminal carries out video playing operation currently and detects that the fingerprint detection module receives second fingerprint information, setting the second fingerprint information as the second identification information.
Further, the processor 110 may call the video processing program stored in the memory 109, and further perform the following operations:
when a mobile terminal is currently performing a video recording operation and detects a third pressing operation triggered based on a pressure sensor of the mobile terminal, acquiring the number of presses within a preset time interval corresponding to the third pressing operation;
and when the number of presses within the preset time interval meets a preset condition, setting the current time as the tag time, and setting the number of presses as the first identification information.
Further, the processor 110 may call the video processing program stored in the memory 109, and further perform the following operations:
determining whether a video tag corresponding to the first identification information exists in the first video information;
and when the video tag corresponding to the first identification information does not exist in the first video information, generating a first video tag based on the first identification information.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication Network system according to an embodiment of the present invention, where the communication Network system is an LTE system of a universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an IP service 204 of an operator, which are in communication connection in sequence.
Specifically, the UE201 may be the mobile terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving gateway) 2034, a PGW (PDN gateway) 2035, and a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME2031 is a control node that handles signaling between the UE201 and the EPC203, and provides bearer and connection management. HSS2032 is used to provide registers to manage functions such as home location register (not shown) and holds subscriber specific information about service characteristics, data rates, etc. All user data may be sent through SGW2034, PGW2035 may provide IP address assignment for UE201 and other functions, and PCRF2036 is a policy and charging control policy decision point for traffic data flow and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above terminal hardware structure and communication network system, various embodiments of the video processing method of the present invention are proposed.
Referring to fig. 3, fig. 3 is a flowchart illustrating a video processing method according to a first embodiment of the present invention.
In this embodiment, the video processing method includes the steps of:
Step S10, when the mobile terminal is currently performing a video recording operation and receives an adding instruction of a video tag, acquiring a tag time and first identification information corresponding to the adding instruction;
in this embodiment, the mobile terminal includes terminals such as a smart phone, an IPAD, an intelligent reader, and the like, and when the mobile terminal currently performs a video recording operation, the mobile terminal may monitor an adding instruction of a video tag in real time, and when the adding instruction of the video tag is monitored, obtain a tag time and first identification information corresponding to the adding instruction, where the tag time is the current time, and the first identification information may be information of a module that triggers the adding instruction in the mobile terminal.
It should be noted that, referring to fig. 4, the mobile terminal may be provided with a pressure sensor to trigger the add instruction through the pressure sensor, or trigger the add instruction through a fingerprint detection module of the mobile terminal, and in other embodiments, the add instruction may also be triggered through an acceleration sensor provided in the mobile terminal.
When the adding instruction is triggered by a pressure sensor, the number and positions of the pressure sensors are not limited: one or more pressure sensors may be provided and arranged at any position on the frame of the mobile terminal, and the corresponding operation is executed only when the processor of the mobile terminal detects a pressure signal on a pressure sensor whose interface has been enabled for pressure detection. For convenience of user control, using multiple sensors also simplifies the selection of a control mode compared with mapping every control to a single sensor; for example, different orderings and combinations can be formed among the multiple sensors. When the mobile terminal is currently performing a video recording operation, it detects in real time whether there is a pressing operation on a pressure sensor; when a pressing operation triggered on a pressure sensor of the mobile terminal is detected, it acquires the pressing parameters of the pressing operation, such as the pressure value on the sensor, the number of times the sensor is pressed and the duration of the press, and the adding instruction is triggered when the acquired pressing parameters meet preset conditions.
Step S20, generating a first video tag based on the first identification information;
in this embodiment, when the tag time and the first identification information corresponding to the adding instruction are acquired, a first video tag is generated according to the first identification information. Specifically, a video tag may be stored in the mobile terminal in advance, and when first identification information corresponding to the adding instruction is acquired, the first identification information is added to the video tag to generate a first video tag, or when the adding instruction pressure sensor triggers and the mobile terminal is provided with a plurality of pressure sensors for triggering different adding instructions by a plurality of users, video tags corresponding to identification information of the pressure sensors one to one may be stored in the mobile terminal in advance, and when the first identification information corresponding to the adding instruction is acquired, a corresponding video tag, that is, a first video tag, is searched according to the first identification information.
If the adding instruction is triggered by a pressure sensor, when the mobile terminal is provided with a plurality of pressure sensors for triggering the adding instruction, the first identification information comprises the position information of the pressure sensor currently triggering the adding instruction or the unique identification information of the pressure sensor currently triggering the adding instruction; when the mobile terminal is provided with only one pressure sensor for triggering the add instruction, the first identification information includes the number of times the pressure sensor is pressed. If the adding instruction is triggered by a fingerprint detection module of the mobile terminal, the first identification information comprises fingerprint information currently detected by the fingerprint detection module.
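As a minimal, purely illustrative sketch (the type and field names below are assumptions, not terms defined by the patent), the alternative forms of first identification information just described can be modelled in Kotlin as follows:

    // Illustrative model of the first identification information; names are
    // assumptions for this sketch, not definitions from the patent.
    sealed class IdentificationInfo {
        // Multiple frame sensors: the triggering sensor's unique number or its position.
        data class SensorNumber(val number: Int) : IdentificationInfo()
        data class SensorPosition(val position: String) : IdentificationInfo()
        // Single sensor: the number of presses serves as the identification.
        data class PressCount(val count: Int) : IdentificationInfo()
        // Fingerprint detection module: the currently detected fingerprint information.
        data class Fingerprint(val data: List<Byte>) : IdentificationInfo()
    }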
It should be noted that, in this embodiment, when the first identification information is obtained, it is determined whether a video tag corresponding to the first identification information already exists in the first video information obtained by the video recording operation. When no such video tag exists, the first video tag is generated based on the first identification information. If a video tag corresponding to the first identification information already exists in the first video information, a prompt message is output to remind the user that a video tag corresponding to the adding instruction already exists and to ask the user to reset the first identification information, that is, to trigger the adding instruction again, so as to avoid duplicate identification information among video tags. This prevents the situation in which, while the first video information is being played and the user wants to watch the segment corresponding to the first identification information, the mobile terminal finds several video tags through the first identification information and cannot play the first video information correctly because the program behaviour becomes ambiguous.
Step S30, adding the first video tag to the first video information obtained by the video recording operation based on the tag time.
In this embodiment, when the first video tag is obtained, the mobile terminal adds the first video tag to the first video information obtained by the video recording operation based on the tag time, that is, adds the first video tag to the time point corresponding to the tag time in the first video information, so that when the first video information is subsequently played, the mobile terminal can accurately and quickly find the corresponding tag time through the first video tag and play the first video information at the tag time.
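As a rough illustration of steps S20 and S30 (a sketch under assumed data structures, not the patent's actual implementation; the field names and the sorted tag list are assumptions), the first video tag can be stored alongside the recorded first video information as a pair of tag time and identification information:

    // Illustrative sketch of steps S20/S30: a video tag pairs a tag time with
    // identification information and is inserted into the recorded video's tag list.
    data class VideoTag(
        val tagTimeMs: Long,          // tag time, as an offset into the recording
        val identification: String    // first identification information, e.g. a sensor number
    )

    data class VideoInfo(
        val filePath: String,
        val tags: MutableList<VideoTag> = mutableListOf()
    )

    fun addVideoTag(video: VideoInfo, tagTimeMs: Long, identification: String) {
        // Generate the first video tag from the identification information and
        // attach it to the first video information at the tag time.
        video.tags.add(VideoTag(tagTimeMs, identification))
        video.tags.sortBy { it.tagTimeMs }
    }

In practice the tag list could equally be kept in a sidecar file or in the video container's metadata; the embodiment above does not prescribe a particular storage format.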
In the video processing method provided by this embodiment, when the mobile terminal is currently performing a video recording operation and receives an adding instruction of a video tag, the tag time and the first identification information corresponding to the adding instruction are obtained, a first video tag is generated based on the first identification information, and the first video tag is then added to the first video information obtained by the video recording operation based on the tag time. Thus the first video tag is added to the first video information according to the adding instruction, and when the first video information is played, its content can be located through the first video tag, or playback can even be fast-forwarded directly to the position corresponding to the first video tag. This simplifies the process of searching the first video information for the video content corresponding to the first video tag, improves searching efficiency, and improves the user experience.
Based on the first embodiment, a second embodiment of the video processing method of the present invention is proposed, and referring to fig. 5, in this embodiment, the video processing method further includes:
Step S40, when the mobile terminal is currently performing a video playing operation and receives a playing instruction of a video tag, acquiring second identification information corresponding to the playing instruction;
in this embodiment, when the mobile terminal currently performs a video playing operation, the mobile terminal may monitor a playing instruction of a video tag in real time, and when the playing instruction of the video tag is monitored, obtain second identification information corresponding to the playing instruction, where the second identification information may be information of a module in the mobile terminal that triggers the playing instruction.
It should be noted that the mobile terminal may be provided with a pressure sensor, so as to trigger the play instruction through the pressure sensor, or trigger the play instruction through a fingerprint detection module of the mobile terminal, and in other embodiments, the play instruction may also be triggered through an acceleration sensor provided in the mobile terminal.
When the playing instruction is triggered through a pressure sensor, that is, when the mobile terminal is currently performing a video playing operation, the mobile terminal detects in real time whether there is a pressing operation on the pressure sensor; when a pressing operation triggered on a pressure sensor of the mobile terminal is detected, it acquires the pressing parameters of the pressing operation, such as the pressure value on the sensor, the number of times the sensor is pressed and the duration of the press, and the playing instruction is triggered when the acquired pressing parameters meet a preset condition.
Step S50, determining whether the second video tag exists in the second video information corresponding to the video playing operation;
in this embodiment, when the second identification information corresponding to the play instruction is obtained, the mobile terminal determines whether the second video tag exists in the second video information corresponding to the video play operation, specifically, the mobile terminal obtains all the video tags in the second video information corresponding to the video play operation, and compares all the obtained video tags with the second video tag to determine whether the second video tag exists in the second video information corresponding to the video play operation, and when all the video tags include the second video tag, determines that the second video tag exists in the second video information corresponding to the video play operation.
Step S60, when the second video tag exists in the second video information, playing the second video information based on the tag moment corresponding to the second video tag.
In this embodiment, when the second video tag exists in the second video information, that is, when the video tags in the second video information include the second video tag, the mobile terminal plays the second video information based on the tag time corresponding to the second video tag; specifically, the mobile terminal fast-forwards or rewinds the second video information to the tag time corresponding to the second video tag and plays it from there.
Referring to fig. 6 and 7, fig. 6 shows a video picture at the start of playback when the mobile terminal is currently performing a video playing operation, and fig. 7 shows the picture played from the tag time corresponding to the second video tag; the playing position in fig. 7 is located precisely by fast-forwarding the video information according to the second video tag, which simplifies the process of searching the second video information for the video content corresponding to the second video tag.
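The playback side described in steps S40 to S60 can be summarised by the following Kotlin sketch, under the same assumed data layout as above; VideoPlayer and seekTo() are placeholders for whatever player interface the terminal actually uses, not an API named by the patent:

    // Illustrative sketch of steps S50/S60: look up the second video tag by its
    // identification information and jump playback to its tag time.
    data class VideoTag(val tagTimeMs: Long, val identification: String)
    data class VideoInfo(val filePath: String, val tags: List<VideoTag>)

    interface VideoPlayer {
        fun seekTo(positionMs: Long)   // stands in for the fast-forward / rewind behaviour
    }

    fun playFromTag(video: VideoInfo, secondIdentification: String, player: VideoPlayer): Boolean {
        // Step S50: does a tag matching the second identification information exist?
        val secondTag = video.tags.firstOrNull { it.identification == secondIdentification }
            ?: return false            // no matching tag: playback continues normally
        // Step S60: play from the tag time corresponding to the second video tag.
        player.seekTo(secondTag.tagTimeMs)
        return true
    }

Because the lookup is keyed on the identification information rather than on a time value, the same physical trigger used while recording (the same sensor, press count or fingerprint) retrieves the corresponding segment during playback.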
In the video processing method provided by this embodiment, when the mobile terminal is currently performing a video playing operation and receives a playing instruction of a video tag, the second identification information corresponding to the playing instruction is obtained, it is then determined whether the second video tag exists in the second video information corresponding to the video playing operation, and when the second video tag exists in the second video information, the second video information is played based on the tag time corresponding to the second video tag. During playback, the second video information can be fast-forwarded or rewound to the position corresponding to the second video tag, which simplifies the process of searching the second video information for the video content corresponding to the second video tag, improves searching efficiency, and improves the user experience.
Based on the second embodiment, a third embodiment of the video processing method of the present invention is proposed, and referring to fig. 8, in this embodiment, a frame of the mobile terminal is provided with a plurality of pressure sensors, and step S10 includes:
step S11, when the mobile terminal carries out video recording operation currently and detects a first pressing operation triggered based on a pressure sensor, acquiring a first pressing parameter of the first pressing operation, wherein the first pressing parameter comprises a first pressure value and a first pressing duration;
in this embodiment, referring to fig. 4, a frame of the mobile terminal is provided with a plurality of pressure sensors, and the adding instructions of different video tags can be triggered by the pressure sensors, so that when the mobile terminal currently performs a video recording operation, the mobile terminal can detect a pressing operation triggered by the pressure sensors in real time.
In this embodiment, when the mobile terminal is currently performing a video recording operation and a first pressing operation triggered based on a pressure sensor is detected, a first pressing parameter of the first pressing operation is acquired, where the first pressing parameter comprises a first pressure value and a first pressing duration.
Step S12, when the first pressure value is greater than a preset pressure threshold and the first pressing duration is greater than or equal to a preset duration, setting the current time as the label time, and setting the identification information of the pressure sensor triggering the pressing operation as the first identification information.
The preset pressure threshold and the preset duration may be set to reasonable default values or may be set by the user.
In this embodiment, when the first pressure value and the first pressing duration are obtained, the mobile terminal first determines whether the first pressure value is greater than the preset pressure threshold; if it is, the mobile terminal further determines whether the first pressing duration is greater than or equal to the preset duration, and only then sets the current time as the tag time and sets the identification information of the pressure sensor triggering the first pressing operation as the first identification information. This prevents the adding instruction of the video tag from being triggered by an accidental operation of the user.
Because the frame of the mobile terminal is provided with the plurality of pressure sensors, each pressure sensor can be numbered in advance, so that each pressure sensor has a unique number, and the number is used as the identification information of the pressure sensor. Or, since the positions of the pressure sensors in the frame of the mobile terminal are different, the position information of each pressure sensor in the mobile terminal can also be used as the identification information of the pressure sensor.
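The check described in steps S11 and S12, combined with the sensor numbering just described, can be sketched in Kotlin as follows; this is an illustration only, and the threshold and minimum-duration values are assumed placeholders rather than values given in the patent:

    // Illustrative sketch of steps S11/S12: decide whether a press on a frame
    // pressure sensor is a deliberate "add video tag" instruction.
    data class PressEvent(
        val sensorNumber: Int,      // identification information of the triggering sensor
        val pressureValue: Float,   // first pressure value
        val durationMs: Long        // first pressing duration
    )

    data class TagRequest(val tagTimeMs: Long, val identification: String)

    const val PRESET_PRESSURE_THRESHOLD = 3.0f   // assumed placeholder value
    const val PRESET_MIN_DURATION_MS = 500L      // assumed placeholder value

    fun toTagRequest(event: PressEvent, recordingPositionMs: Long): TagRequest? {
        // Only a press that is both hard enough and long enough counts,
        // which filters out accidental touches on the frame.
        if (event.pressureValue <= PRESET_PRESSURE_THRESHOLD) return null
        if (event.durationMs < PRESET_MIN_DURATION_MS) return null
        // The current time (here the current recording position) becomes the tag
        // time; the sensor's number becomes the first identification information.
        return TagRequest(recordingPositionMs, event.sensorNumber.toString())
    }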
In the video processing method provided by this embodiment, when the mobile terminal is currently performing a video recording operation and detects a first pressing operation triggered by a pressure sensor, the first pressing parameter of the first pressing operation is obtained; then, when the first pressure value is greater than the preset pressure threshold and the first pressing duration is greater than or equal to the preset duration, the current time is set as the tag time and the identification information of the pressure sensor triggering the first pressing operation is set as the first identification information. In this way the adding instruction of the video tag is triggered accurately according to the pressing parameters, the tag time and the first identification information are obtained reliably, accidental triggering of the adding instruction by a user's misoperation is avoided, and the user experience is further improved.
Based on the third embodiment, a fourth embodiment of the video processing method of the present invention is proposed, and referring to fig. 9, in this embodiment, step S40 includes:
Step S41, when the mobile terminal is currently performing a video playing operation and detects a second pressing operation triggered by the pressure sensor, acquiring a second pressing parameter of the second pressing operation, wherein the second pressing parameter comprises a second pressure value and a second pressing duration;
in this embodiment, when the mobile terminal performs a video playing operation currently, the mobile terminal may trigger a playing instruction of a different video tag through each pressure sensor, and when the mobile terminal performs a video playing operation currently, the mobile terminal may detect a pressing operation triggered based on the pressure sensor in real time.
When the mobile terminal carries out video playing operation currently, if a second pressing operation triggered based on the pressure sensor is detected, a second pressing parameter of the second pressing operation is obtained, namely a second pressing time of the second pressing operation and a pressure value of the pressure sensor triggering the second pressing operation are obtained.
Step S42, when the second pressure value is greater than the preset pressure threshold value and the second pressing duration is greater than or equal to the preset duration, setting the identification information of the pressure sensor that triggers the second pressing operation as the second identification information.
In this embodiment, when a second pressure value and a second pressing duration are obtained, the mobile terminal determines whether the second pressure value is greater than a preset pressure threshold, and when the second pressure value is greater than the preset pressure threshold, the mobile terminal determines whether the second pressing duration is greater than or equal to the preset duration, and when the second pressing duration is greater than or equal to the preset duration, the mobile terminal sets identification information of a pressure sensor triggering the second pressing operation as the second identification information.
According to the video processing method provided by this embodiment, when the mobile terminal currently performs a video playing operation and detects a second pressing operation triggered by the pressure sensor, the second pressing parameter of the second pressing operation is acquired; then, when the second pressure value is greater than the preset pressure threshold and the second pressing duration is greater than or equal to the preset duration, the identification information of the pressure sensor triggering the second pressing operation is set as the second identification information. The playing instruction of the video tag, together with the second identification information, is therefore obtained only when the pressing parameters qualify, which improves the accuracy of the playing instruction of the video tag, avoids triggering the playing instruction through a misoperation of the user, and further improves the user experience.
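For the playing side, a similarly hedged sketch shows how the second identification information obtained above could be resolved to a stored video tag so that playback starts from its label moment. TagPlayer and playFromTag are hypothetical names, not part of the disclosure.

```kotlin
// Illustrative sketch only (hypothetical names): resolve the second identification
// information to a stored video tag and move playback to that tag's label moment.
data class VideoTag(val identification: String, val labelTimeMs: Long)

class TagPlayer(private val tags: List<VideoTag>) {
    var positionMs: Long = 0
        private set

    // Returns true when a matching second video tag exists and playback was moved.
    fun playFromTag(secondIdentification: String): Boolean {
        val tag = tags.firstOrNull { it.identification == secondIdentification } ?: return false
        positionMs = tag.labelTimeMs
        return true
    }
}

fun main() {
    val player = TagPlayer(listOf(VideoTag("sensor-2", 12_300), VideoTag("sensor-3", 47_800)))
    println(player.playFromTag("sensor-3")) // true: playback jumps to 47 800 ms
    println(player.positionMs)
    println(player.playFromTag("sensor-9")) // false: no such tag, playback unchanged
}
```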
A fifth embodiment of the video processing method of the present invention is proposed based on the second embodiment, and referring to fig. 10, in this embodiment, step S10 includes:
step S13, when the mobile terminal currently performs video recording operation and detects that a fingerprint detection module of the mobile terminal receives first fingerprint information, acquiring an input moment corresponding to the first fingerprint information;
in this embodiment, the mobile terminal is provided with a fingerprint detection module, and the adding instruction of the video tag can be triggered through the fingerprint detection module; therefore, when the mobile terminal currently performs a video recording operation, the mobile terminal monitors the detection result of the fingerprint detection module in real time.
In this embodiment, when the mobile terminal performs a video recording operation currently, if it is detected that the fingerprint detection module of the mobile terminal receives the first fingerprint information, an input time corresponding to the first fingerprint information, that is, a current time, is obtained.
Step S14, setting the input time as the label time, and setting the first fingerprint information as the first identification information.
Further, in an embodiment, step S40 includes: and when the mobile terminal carries out video playing operation currently and detects that the fingerprint detection module receives second fingerprint information, setting the second fingerprint information as the second identification information.
In this embodiment, the playing instruction of the video tag may also be triggered by the fingerprint detection module, and when the mobile terminal currently performs a video playing operation, the mobile terminal monitors the detection result of the fingerprint detection module in real time.
In this embodiment, when the mobile terminal currently performs a video playing operation, if it is detected that the fingerprint detection module of the mobile terminal receives the second fingerprint information, the second fingerprint information is set as the second identification information.
According to the video processing method provided by this embodiment, when the mobile terminal currently performs a video recording operation and detects that the fingerprint detection module of the mobile terminal receives first fingerprint information, the input time corresponding to the first fingerprint information is acquired; then the input time is set as the label time and the first fingerprint information is set as the first identification information. The adding instruction of the video tag is therefore triggered accurately according to the detection result of the fingerprint detection module, which improves the accuracy of the adding instruction, avoids triggering the adding instruction through a misoperation of the user, and further improves the user experience.
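A corresponding sketch for the fingerprint embodiment, again with purely illustrative names (FingerprintInput, tagFromFingerprint): the input moment becomes the label time and the fingerprint information itself becomes the first identification information.

```kotlin
// Illustrative sketch only (hypothetical names): build a video tag from the output
// of the fingerprint detection module.
data class FingerprintInput(val template: String, val inputTimeMs: Long)

data class VideoTag(val identification: String, val labelTimeMs: Long)

fun tagFromFingerprint(input: FingerprintInput): VideoTag =
    // The input moment of the fingerprint becomes the label time; the fingerprint
    // information itself becomes the first identification information.
    VideoTag(identification = "fp-${input.template}", labelTimeMs = input.inputTimeMs)

fun main() {
    println(tagFromFingerprint(FingerprintInput(template = "a1b2c3", inputTimeMs = 30_500)))
}
```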
Based on the second embodiment, a sixth embodiment of the video processing method of the present invention is proposed, and referring to fig. 11, in this embodiment, step S10 includes:
step S15, when the mobile terminal currently performs a video recording operation and detects a third pressing operation triggered by a pressure sensor of the mobile terminal, acquiring the pressing times within a preset time interval corresponding to the third pressing operation;
in this embodiment, referring to fig. 12, a frame of the mobile terminal is provided with a pressure sensor, and the pressure sensor can trigger the adding instructions of different video tags; therefore, when the mobile terminal currently performs a video recording operation, the mobile terminal detects, in real time, a pressing operation triggered by the pressure sensor.
In this embodiment, when the mobile terminal currently performs a video recording operation, if a third pressing operation triggered by the pressure sensor is detected, the pressing times within the preset time interval corresponding to the third pressing operation are obtained.
Step S16, when the pressing times within the preset time interval satisfy a preset condition, setting the current time as the tag time, and setting the pressing times as the first identification information.
In this embodiment, when the pressing times within the preset time interval satisfy the preset condition, the current time is set as the tag time and the pressing times are set as the first identification information, so that the corresponding adding instruction can be triggered by the number of presses.
The same pressure sensor of the mobile terminal can thus trigger the adding instructions of different video tags. Specifically, different preset conditions can be set for different pressing times: the adding instruction of a video tag is triggered when the pressing times within the preset time interval fall within a preset times range, and the preset times range is divided into groups so that each group triggers the adding instruction of a different video tag. For example, the preset times range may be 3 to 11 and be split into groups such as 3 to 5, 6 to 8 and 9 to 10; alternatively, the preset times range may be 3 to 6, with each press count in the range corresponding to the adding instruction of one video tag.
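The grouping described above can be pictured with a short sketch. The concrete ranges and the function name tagGroupForPressCount are illustrative assumptions, since the embodiment leaves the preset condition and the grouping open.

```kotlin
// Illustrative sketch only (hypothetical names and ranges): map the number of
// presses counted within the preset time interval to the adding instruction of a
// different video tag per group, e.g. 3-5, 6-8 and 9-10.
fun tagGroupForPressCount(count: Int): Int? = when (count) {
    in 3..5 -> 1   // first group of presses -> first tag type
    in 6..8 -> 2   // second group -> second tag type
    in 9..10 -> 3  // third group -> third tag type
    else -> null   // outside the preset times range: no adding instruction
}

fun main() {
    listOf(2, 4, 7, 9, 12).forEach { n ->
        println("presses=$n -> group=${tagGroupForPressCount(n)}")
    }
}
```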
Further, step S40 includes: when the mobile terminal carries out video playing operation currently and detects a fourth pressing operation triggered based on a pressure sensor, acquiring the pressing times within a preset time interval corresponding to the fourth pressing operation; and when the pressing times in a preset time interval meet a preset condition, setting the pressing times as the second identification information.
According to the video processing method provided by this embodiment, when the mobile terminal currently performs a video recording operation and detects a third pressing operation triggered by a pressure sensor of the mobile terminal, the pressing times within the preset time interval corresponding to the third pressing operation are obtained; then, when the pressing times meet the preset condition, the current time is set as the label time and the pressing times are set as the first identification information. The adding instruction of the video tag is therefore triggered accurately according to the number of presses, which improves the accuracy of the adding instruction, avoids triggering the adding instruction through a misoperation of the user, and further improves the user experience.
Based on the above-described embodiment, a seventh embodiment of the video processing method of the present invention is proposed, and referring to fig. 13, in this embodiment, step S20 includes:
step S21, determining whether a video tag corresponding to the first identification information exists in the first video information;
in this embodiment, when the first video information is subsequently played, the mobile terminal may accurately and quickly find the corresponding tag time through the first video tag, and when the first identification information is obtained, the mobile terminal determines whether the video tag corresponding to the first identification information exists in the first video information obtained by the video recording operation.
Specifically, it is first determined whether the first video information already contains an added video tag; when no video tag has been added, it follows directly that no video tag corresponding to the first identification information exists. When an added video tag exists: if the first identification information is the position information or the number of a pressure sensor, it is determined whether the identification information of the added video tag includes that position information or number, and if it does not, it is determined that no video tag corresponding to the first identification information exists in the first video information; if the first identification information is the fingerprint information detected by the fingerprint detection module, it is determined whether the identification information of the added video tag includes that fingerprint information, and if it does not, it is determined that no video tag corresponding to the first identification information exists in the first video information; if the first identification information is the pressing times, it is determined whether the identification information of the added video tag includes those pressing times, and if it does not, it is determined that no video tag corresponding to the first identification information exists in the first video information.
Step S22, when the video tag corresponding to the first identification information does not exist in the first video information, generating a first video tag based on the first identification information.
In the video processing method provided by this embodiment, it is first determined whether a video tag corresponding to the first identification information exists in the first video information, and the first video tag is generated based on the first identification information only when no such tag exists. This prevents several video tags in the first video information from corresponding to the same first identification information, and thus prevents the mobile terminal from finding several video tags through the first identification information when the first video information is subsequently played, which could cause the program to run abnormally or even make the first video information unplayable, thereby further improving the user experience.
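The existence check of steps S21 and S22 can be summarised in one sketch that also models the three kinds of identification information used in the earlier embodiments. The sealed class TagId and the function addTagIfAbsent are illustrative assumptions, not the disclosed implementation.

```kotlin
// Illustrative sketch only (hypothetical names): the three kinds of identification
// information modelled as one type, plus the check that generates the first video
// tag only when no tag with the same identification information already exists.
sealed class TagId {
    data class SensorNumber(val number: Int) : TagId()     // number or position of a pressure sensor
    data class Fingerprint(val template: String) : TagId() // fingerprint detection result
    data class PressCount(val count: Int) : TagId()        // pressing times in the interval
}

data class VideoTag(val id: TagId, val labelTimeMs: Long)

fun addTagIfAbsent(tags: MutableList<VideoTag>, id: TagId, labelTimeMs: Long): Boolean {
    if (tags.any { it.id == id }) return false // a matching tag already exists: do not add another
    tags.add(VideoTag(id, labelTimeMs))
    return true
}

fun main() {
    val tags = mutableListOf<VideoTag>()
    println(addTagIfAbsent(tags, TagId.SensorNumber(2), 12_300))       // true: tag added
    println(addTagIfAbsent(tags, TagId.SensorNumber(2), 20_000))       // false: duplicate identification
    println(addTagIfAbsent(tags, TagId.Fingerprint("a1b2c3"), 30_500)) // true: different identification
    println(tags)
}
```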
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where a video processing program is stored on the computer-readable storage medium, and when executed by a processor, the video processing program implements the following operations:
when a mobile terminal carries out video recording operation currently and receives an adding instruction of a video label, obtaining label time and first identification information corresponding to the adding instruction;
generating a first video tag based on the first identification information;
and adding the first video label in first video information obtained by the video recording operation based on the label moment.
Further, the video processing program when executed by the processor further performs the following operations:
when the mobile terminal carries out video playing operation currently and receives a playing instruction of a video label, acquiring second identification information corresponding to the playing instruction;
determining whether a second video tag exists in second video information corresponding to the video playing operation;
and when the second video label exists in the second video information, playing the second video information based on the label moment corresponding to the second video label.
Further, the video processing program when executed by the processor further performs the following operations:
when a mobile terminal carries out video recording operation currently and detects a first pressing operation triggered based on a pressure sensor, acquiring a first pressing parameter of the first pressing operation, wherein the first pressing parameter comprises a first pressure value and a first pressing duration;
and when the first pressure value is larger than a preset pressure threshold value and the first pressing time length is larger than or equal to the preset time length, setting the current time as the label time, and setting the identification information of the pressure sensor triggering the pressing operation as the first identification information.
Further, the video processing program when executed by the processor further performs the following operations:
when the mobile terminal carries out video playing operation currently and detects second pressing operation triggered based on the pressure sensor, second pressing parameters of the second pressing operation are obtained, wherein the second pressing parameters comprise a second pressure value and a second pressing duration;
and when the second pressure value is greater than the preset pressure threshold value and the second pressing time length is greater than or equal to the preset time length, setting the identification information of the pressure sensor triggering the second pressing operation as the second identification information.
Further, the video processing program when executed by the processor further performs the following operations:
when a mobile terminal carries out video recording operation currently and detects that a fingerprint detection module of the mobile terminal receives first fingerprint information, acquiring input time corresponding to the first fingerprint information;
and setting the input time as the label time, and setting the first fingerprint information as the first identification information.
Further, the video processing program when executed by the processor further performs the following operations:
and when the mobile terminal carries out video playing operation currently and detects that the fingerprint detection module receives second fingerprint information, setting the second fingerprint information as the second identification information.
Further, the video processing program when executed by the processor further performs the following operations:
when a mobile terminal carries out video recording operation currently and detects a third pressing operation triggered by a pressure sensor based on the mobile terminal, acquiring the pressing times within a preset time interval corresponding to the third pressing operation;
and when the pressing times in a preset time interval meet a preset condition, setting the current time as the label time, and setting the pressing times as the first identification information.
Further, the video processing program when executed by the processor further performs the following operations:
determining whether a video tag corresponding to the first identification information exists in the first video information;
and when the video tag corresponding to the first identification information does not exist in the first video information, generating a first video tag based on the first identification information.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (9)

1. A video processing method, characterized in that the video processing method comprises the steps of:
when a mobile terminal carries out video recording operation currently and receives an adding instruction of a video label, obtaining label time and first identification information corresponding to the adding instruction;
generating a first video tag based on the first identification information;
adding the first video tag in first video information obtained by the video recording operation based on the tag moment;
the frame of the mobile terminal is provided with a plurality of pressure sensors, and the steps of acquiring the tag time and the first identification information corresponding to the adding instruction when the mobile terminal currently performs video recording operation and receives the adding instruction of the video tag comprise:
when a mobile terminal carries out video recording operation currently and detects a first pressing operation triggered based on a pressure sensor, acquiring a first pressing parameter of the first pressing operation, wherein the first pressing parameter comprises a first pressure value and a first pressing duration;
and when the first pressure value is larger than a preset pressure threshold value and the first pressing time length is larger than or equal to the preset time length, setting the current time as the label time, and setting the identification information of the pressure sensor triggering the pressing operation as the first identification information.
2. The video processing method of claim 1, wherein the video processing method further comprises:
when the mobile terminal carries out video playing operation currently and receives a playing instruction of a video label, acquiring second identification information corresponding to the playing instruction;
determining whether a second video tag exists in second video information corresponding to the video playing operation;
and when the second video label exists in the second video information, playing the second video information based on the label moment corresponding to the second video label.
3. The video processing method according to claim 2, wherein the step of acquiring the second identification information corresponding to the play instruction when the mobile terminal currently performs the video play operation and receives the play instruction of the video tag comprises:
when the mobile terminal carries out video playing operation currently and detects second pressing operation triggered based on the pressure sensor, second pressing parameters of the second pressing operation are obtained, wherein the second pressing parameters comprise a second pressure value and a second pressing duration;
and when the second pressure value is greater than the preset pressure threshold value and the second pressing time length is greater than or equal to the preset time length, setting the identification information of the pressure sensor triggering the second pressing operation as the second identification information.
4. The video processing method according to claim 2, wherein the step of acquiring the tag time and the first identification information corresponding to the adding instruction when the mobile terminal currently performs the video recording operation and receives the adding instruction of the video tag comprises:
when a mobile terminal carries out video recording operation currently and detects that a fingerprint detection module of the mobile terminal receives first fingerprint information, acquiring input time corresponding to the first fingerprint information;
and setting the input time as the label time, and setting the first fingerprint information as the first identification information.
5. The video processing method according to claim 4, wherein the step of acquiring the second identification information corresponding to the playing instruction when the mobile terminal currently performs the video playing operation and receives the playing instruction of the video tag comprises:
and when the mobile terminal carries out video playing operation currently and detects that the fingerprint detection module receives second fingerprint information, setting the second fingerprint information as the second identification information.
6. The video processing method according to claim 2, wherein the step of acquiring the tag time and the first identification information corresponding to the adding instruction when the mobile terminal currently performs the video recording operation and receives the adding instruction of the video tag comprises:
when a mobile terminal carries out video recording operation currently and detects a third pressing operation triggered by a pressure sensor based on the mobile terminal, acquiring the pressing times within a preset time interval corresponding to the third pressing operation;
and when the pressing times in a preset time interval meet a preset condition, setting the current time as the label time, and setting the pressing times as the first identification information.
7. The video processing method of any of claims 1 to 6, wherein the step of generating a first video tag based on the first identification information comprises:
determining whether a video tag corresponding to the first identification information exists in the first video information;
and when the video tag corresponding to the first identification information does not exist in the first video information, generating a first video tag based on the first identification information.
8. A mobile terminal, characterized in that the mobile terminal comprises: memory, processor and video processing program stored on the memory and executable on the processor, which when executed by the processor implements the steps of the video processing method according to any of claims 1 to 7.
9. A computer-readable storage medium, having stored thereon a video processing program which, when executed by a processor, implements the steps of the video processing method according to any one of claims 1 to 7.
CN201810075139.0A 2018-01-25 2018-01-25 Video processing method, mobile terminal and computer readable storage medium Active CN108260009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810075139.0A CN108260009B (en) 2018-01-25 2018-01-25 Video processing method, mobile terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810075139.0A CN108260009B (en) 2018-01-25 2018-01-25 Video processing method, mobile terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108260009A CN108260009A (en) 2018-07-06
CN108260009B true CN108260009B (en) 2021-11-02

Family

ID=62741900

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810075139.0A Active CN108260009B (en) 2018-01-25 2018-01-25 Video processing method, mobile terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108260009B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110035249A (en) * 2019-03-08 2019-07-19 视联动力信息技术股份有限公司 A kind of video gets method and apparatus ready
CN110097428B (en) * 2019-04-30 2021-08-17 北京达佳互联信息技术有限公司 Electronic order generation method, device, terminal and storage medium
CN112543368A (en) * 2019-09-20 2021-03-23 北京小米移动软件有限公司 Video processing method, video playing method, video processing device, video playing device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103377203A (en) * 2012-04-18 2013-10-30 宇龙计算机通信科技(深圳)有限公司 Terminal and sound record management method
CN103780973A (en) * 2012-10-17 2014-05-07 三星电子(中国)研发中心 Video label adding method and video label adding device
CN104980677A (en) * 2014-04-02 2015-10-14 联想(北京)有限公司 Method and device for adding label into video
CN107027072A (en) * 2017-05-04 2017-08-08 深圳市金立通信设备有限公司 A kind of video marker method, terminal and computer-readable recording medium
CN107124568A (en) * 2016-02-25 2017-09-01 掌赢信息科技(上海)有限公司 A kind of video recording method and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8984549B2 (en) * 2011-09-28 2015-03-17 Google Technology Holdings LLC Method for tag insertion and notification for DVR addressable advertisement


Also Published As

Publication number Publication date
CN108260009A (en) 2018-07-06

Similar Documents

Publication Publication Date Title
CN108572764B (en) Character input control method and device and computer readable storage medium
CN109701266B (en) Game vibration method, device, mobile terminal and computer readable storage medium
CN109195143B (en) Network access method, mobile terminal and readable storage medium
CN110180181B (en) Method and device for capturing wonderful moment video and computer readable storage medium
CN107423238B (en) Screen projection connection method and device and computer readable storage medium
CN110187808B (en) Dynamic wallpaper setting method and device and computer-readable storage medium
CN109151216B (en) Application starting method, mobile terminal, server and computer readable storage medium
CN112533189A (en) Transmission method, mobile terminal and storage medium
CN109375846B (en) Method and device for displaying breathing icon, mobile terminal and readable storage medium
CN109062465A (en) A kind of application program launching method, mobile terminal and storage medium
CN107707755B (en) Key using method, terminal and computer readable storage medium
CN108260009B (en) Video processing method, mobile terminal and computer readable storage medium
CN112637410A (en) Method, terminal and storage medium for displaying message notification
CN112437472B (en) Network switching method, equipment and computer readable storage medium
CN107678622B (en) Application icon display method, terminal and storage medium
CN107809527B (en) Method for presenting shortcut operation and electronic equipment
CN107347114B (en) Voice information receiving and sending control method and terminal
CN111970738A (en) Network switching control method, equipment and computer readable storage medium
CN108184161B (en) Video playing method, mobile terminal and computer readable storage medium
CN108183833B (en) Response processing method and device and computer readable storage medium
CN107155008B (en) A kind of display methods of communications records, terminal and computer readable storage medium
CN107404568B (en) Control switch management method and mobile terminal
CN114756187A (en) Screen-casting video image processing method and equipment and computer readable storage medium
CN110287381B (en) Page control node searching method, terminal and computer readable storage medium
CN109558049B (en) Notification message display processing method, mobile terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant