CN110379428A - Information processing method and terminal device - Google Patents

Information processing method and terminal device

Info

Publication number
CN110379428A
Authority
CN
China
Prior art keywords
target
input
voice information
text information
dialog box
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910640218.6A
Other languages
Chinese (zh)
Inventor
庄晓亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201910640218.6A priority Critical patent/CN110379428A/en
Publication of CN110379428A publication Critical patent/CN110379428A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/26 - Speech to text systems
    • G10L 17/00 - Speaker identification or verification techniques
    • G10L 17/22 - Interactive procedures; Man-machine interfaces
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M 1/00 - Substation equipment, e.g. for use by subscribers
    • H04M 1/72 - Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 - User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/7243 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M 1/72433 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones
    • H04M 1/72436 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present invention relates to the field of communication technology and provides an information processing method and a terminal device, to solve the prior-art problem that a terminal device displays the text corresponding to voice information with poor flexibility. The method comprises: receiving a first input of a user on a target object in a target video; in response to the first input, converting the voice information of the target object into text information; and displaying the text information on the video playback interface of the target video. In this way, the terminal device can display, according to the user's operation, the text information of the voice information of a specified object on the video playback interface, which improves the flexibility of text display.

Description

Information processing method and terminal device
Technical field
Embodiments of the present invention relate to the field of communication technology, and in particular to an information processing method and a terminal device.
Background technique
With the development of terminal devices, video is applied in an increasingly wide range of fields; besides movies and television, video content is also common in news, social, and entertainment information. To make it easier for the user to understand the voice information in a video, a terminal device may present the speech of a character in a video clip in text form.
In the prior art, a terminal device usually obtains the voice information in a video file in a fixed manner and then displays the corresponding text. For example, the terminal device obtains in advance a video file and its corresponding subtitle file, and displays the text in the subtitle file while playing the video. Because the manner in which the terminal device obtains the voice information is fixed, the flexibility with which the terminal device displays the text corresponding to the voice information is poor.
Summary of the invention
Embodiments of the present invention provide an information processing method and a terminal device, to solve the prior-art problem that a terminal device displays the text corresponding to voice information with poor flexibility.
To solve the above technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides an information processing method, comprising:
receiving a first input of a user on a target object in a target video;
in response to the first input, converting the voice information of the target object into text information;
displaying the text information on the video playback interface of the target video.
In a second aspect, an embodiment of the present invention further provides a terminal device, comprising:
a first receiving module, configured to receive a first input of a user on a target object in a target video;
a conversion module, configured to convert the voice information of the target object into text information in response to the first input;
a first display module, configured to display the text information on the video playback interface of the target video.
Optionally, the first display module is specifically configured to display the text information in a target dialog box on the video playback interface of the target video.
Optionally, the target object includes N objects, the target dialog box includes N dialog boxes, and the N objects correspond one-to-one with the N dialog boxes;
the first display module is specifically configured to:
display, on the video playback interface of the target video, the text information of the voice information of the i-th object in the i-th dialog box of the N dialog boxes;
where N is an integer greater than 1, i is an integer greater than 0, and i ≤ N.
Optionally, the first input includes a first sub-input at a first moment and a second sub-input at a second moment, the second moment being later than the first moment;
the conversion module is specifically configured to:
start recording the voice data of the target object in response to the first sub-input;
stop recording in response to the second sub-input, and obtain the voice information of the target object in a first time period, the first time period being the period between the first moment and the second moment;
convert the voice information of the target object in the first time period into text information.
Optionally, the conversion module includes:
an obtaining sub-module, configured to obtain the voice information of the target object in response to the first input;
a first receiving sub-module, configured to receive a second input of the user;
a conversion sub-module, configured to convert at least part of the voice information into text information in response to the second input.
Optionally, the terminal device further includes:
a second display module, configured to display a target control, the target control including a first child control and a second child control;
the second input is an input on at least one of the first child control and the second child control;
the conversion sub-module is specifically configured to: in response to the second input, obtain, based on a first position of the first child control in the target control and a second position of the second child control in the target control, the voice information of the target object in a second time period, the second time period being the period between a third moment corresponding to the first position and a fourth moment corresponding to the second position;
and convert at least part of the voice information of the target object in the second time period into text information.
Optionally, the first display module includes:
a first display sub-module, configured to display at least two dialog boxes on the video playback interface of the target video;
a second receiving sub-module, configured to receive a third input of the user on a target dialog box among the at least two dialog boxes;
a second display sub-module, configured to display the text information in the target dialog box in response to the third input.
Optionally, the N objects include a first object and a second object, and the N dialog boxes include a first dialog box and a second dialog box;
the terminal device further includes:
a second receiving module, configured to receive a fourth input of the user on the first dialog box or the second dialog box;
an obtaining module, configured to obtain a first timbre of the first object and a second timbre of the second object in response to the fourth input;
an output module, configured to output the voice information of the first object with the second timbre and output the voice information of the second object with the first timbre.
Optionally, the terminal device further includes:
a third receiving module, configured to receive a fifth input of the user on the first dialog box or the second dialog box;
a restoring module, configured to restore the timbre of at least one of the first object and the second object in response to the fifth input.
Optionally, the terminal device further includes:
a fourth receiving module, configured to receive a sixth input of the user on the target dialog box;
an adjusting module, configured to adjust the format of the text information in the target dialog box in response to the sixth input.
In a third aspect, an embodiment of the present invention further provides a terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the information processing method described above when executing the computer program.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the information processing method described above.
In the embodiments of the present invention, a first input of a user on a target object in a target video is received; in response to the first input, the voice information of the target object is converted into text information; and the text information is displayed on the video playback interface of the target video. In this way, the terminal device can display, according to the user's operation, the text information of the voice information of a specified object on the video playback interface, which improves the flexibility of text display.
Detailed description of the invention
Fig. 1 is a first flowchart of an information processing method according to an embodiment of the present invention;
Fig. 2 is a second flowchart of an information processing method according to an embodiment of the present invention;
Fig. 3a to Fig. 3i are schematic diagrams of interfaces of a terminal device according to embodiments of the present invention;
Fig. 4 is a first structural diagram of a terminal device according to an embodiment of the present invention;
Fig. 5 is a second structural diagram of a terminal device according to an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, Fig. 1 is a flowchart of an information processing method according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step 101: receive a first input of a user on a target object in a target video.
In this step, the target object may be any object in the video that can produce voice information, for example, a person, an animal in an animation, or another animated character.
The first input may specifically be a click input, a slide input, or a multi-finger slide input performed by the user. During video playback, the user may operate on a specific position of the target object shown in the video, for example, slide two fingers relative to each other near the mouth of character A in the video.
Step 102: in response to the first input, convert the voice information of the target object into text information.
In this step, the terminal device obtains the voice information of the target object according to the first input of the user, and converts the voice information of the target object into text information. For example, upon receiving the first input of the user, the terminal records the voice information of the target object and converts it into text information in real time. As another example, the terminal may start recording the voice information of the target object upon receiving the first input, obtain the duration of the first input, stop recording when the first input ends, and then convert the voice information obtained during this duration into text information.
Step 103: display the text information on the video playback interface of the target video.
In this step, the above text information is displayed on the video playback interface for the user to watch. Specifically, the text information may be displayed according to the playback progress of the video, that is, the time at which the text is displayed corresponds to the time at which the target object speaks. Through a dedicated gesture operation, the embodiments of the present invention make it easy to record voice information during video playback, edit the voice segment, select a dialog template, and present the intercepted voice information in the form of a dialog box. On the one hand, this makes it convenient for the user to intercept the video voice information to be highlighted; on the other hand, the character's caption information is displayed in a lively and entertaining way.
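As a purely illustrative sketch of steps 101 to 103 (the disclosure prescribes no particular implementation), the following Kotlin fragment models the flow of receiving the first input, converting the voice information, and displaying the resulting text; the names TargetObject, SpeechToText, and VideoPlayerView are assumptions introduced here and are not part of the original description.

```kotlin
// Illustrative sketch only; the disclosure does not prescribe an implementation.
// TargetObject, SpeechToText and VideoPlayerView are hypothetical names.

data class TargetObject(val id: String, val name: String)

fun interface SpeechToText {
    /** Converts a recorded audio segment of the target object into text. */
    fun transcribe(audio: ByteArray): String
}

class VideoPlayerView(private val speechToText: SpeechToText) {

    /** Step 101: a first input (e.g. a two-finger slide) is received on an object in the video. */
    fun onFirstInput(target: TargetObject, recordedAudio: ByteArray) {
        // Step 102: convert the target object's voice information into text information.
        val text = speechToText.transcribe(recordedAudio)
        // Step 103: display the text on the video playback interface, in sync with playback.
        showOnPlaybackInterface(target, text)
    }

    private fun showOnPlaybackInterface(target: TargetObject, text: String) {
        println("[${target.name}] $text") // stand-in for drawing a caption or dialog box
    }
}
```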
In the embodiments of the present invention, the above information processing method may be applied to a terminal device, such as a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), or a wearable device.
According to the information processing method of this embodiment of the present invention, a first input of a user on a target object in a target video is received; in response to the first input, the voice information of the target object is converted into text information; and the text information is displayed on the video playback interface of the target video. In this way, the terminal device can display, according to the user's operation, the text information of the voice information of a specified object on the video playback interface, which improves the flexibility of text display.
Referring to Fig. 2, the main difference between this embodiment and the above embodiment is that at least part of the voice information of the target object is converted into text information.
Fig. 2 is a flowchart of an information processing method according to an embodiment of the present invention. As shown in Fig. 2, the method includes the following steps:
Step 201: receive a first input of a user on a target object in a target video.
Step 202: in response to the first input, obtain the voice information of the target object.
In this step, the terminal device may obtain the voice information of the target object according to the input time of the first input. For example, upon receiving the first input of the user, the terminal starts recording the voice information of the target object in real time. As another example, the terminal may start recording the voice information of the target object upon receiving the first input, obtain the duration of the first input, stop recording when the first input ends, and then obtain the voice information recorded during this duration.
Step 203: receive a second input of the user.
In this step, the terminal device receives a second input. The second input may specifically be a click input, a slide input, or a multi-finger slide input performed by the user. The second input may be an input for triggering the conversion of the voice information into text information, or an input for selecting at least part of the voice information of the target object.
Step 204: in response to the second input, convert at least part of the voice information into text information.
In this step, in response to the second input, the terminal device may obtain part or all of the voice information of the target object, and convert that part or all of the voice information into text information. For example, the terminal determines at least part of the voice information of the target object based on the user's operation position on a control; as another example, the terminal determines the voice information of a partial time period based on the duration of the user's operation.
Step 205: display the text information on the video playback interface of the target video.
For the implementation of steps 201 and 205, reference may be made to the description in the above embodiment; to avoid repetition, details are not described here again.
Optionally, the first input includes a first sub-input at a first moment and a second sub-input at a second moment, the second moment being later than the first moment;
the converting the voice information of the target object into text information in response to the first input includes:
starting to record the voice data of the target object in response to the first sub-input;
stopping recording in response to the second sub-input, and obtaining the voice information of the target object in a first time period, the first time period being the period between the first moment and the second moment;
converting the voice information of the target object in the first time period into text information.
In this embodiment, the terminal device responds to the first sub-input and the second sub-input, determines the first moment in the video corresponding to the first sub-input and the second moment in the video corresponding to the second sub-input respectively, obtains the voice information of the target object in the video from the first moment to the second moment, and converts this voice information into text information. Specifically, the terminal starts recording the voice information in response to the first sub-input and ends the recording in response to the second sub-input, thereby obtaining the voice information of the target object in the first time period.
For example, as shown in Fig. 3a, the user performs a two-finger slide at the mouth of a character in the video clip, which is the first sub-input. The terminal device responds to the first sub-input, determines the first moment of video playback, and starts recording the character's voice information. After recording for a period of time, the user brings the fingers together from the spread state, which is the second sub-input, as shown in Fig. 3b. The terminal device responds to the second sub-input and ends the voice recording; this moment is the second moment. The terminal device converts the character's voice information between the first moment and the second moment into text information.
In this way, the terminal device can determine the start and end moments of the voice information of the target object according to the user's operation, which makes it convenient for the user to flexibly select the voice information to be converted into text and improves operating efficiency.
This embodiment can also be applied to the embodiment corresponding to Fig. 1 and achieve the same beneficial effects.
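The start/stop behaviour of the first and second sub-inputs described above can be sketched as follows; this is an assumed, minimal model in Kotlin, with placeholder recording logic rather than any real audio API, and the transcription step is passed in as a plain function.

```kotlin
// Hypothetical sketch of the first/second sub-input handling; the in-memory buffer is a
// placeholder for platform-specific recording, and transcribe() is injected by the caller.

class SubInputRecorder(private val transcribe: (ByteArray) -> String) {

    private var firstMomentMs: Long? = null
    private val buffer = mutableListOf<Byte>()

    /** First sub-input at the first moment: start recording the target object's voice data. */
    fun onFirstSubInput(playbackPositionMs: Long) {
        firstMomentMs = playbackPositionMs
        buffer.clear() // recording would start here in a real implementation
    }

    /** Second sub-input at the later second moment: stop recording and convert the voice
     *  information of the first time period (first moment .. second moment) into text. */
    fun onSecondSubInput(playbackPositionMs: Long): String? {
        val firstMoment = firstMomentMs ?: return null // no first sub-input was received
        require(playbackPositionMs >= firstMoment) { "second moment must be later than the first" }
        firstMomentMs = null
        // the buffer holds the audio captured during the first time period
        return transcribe(buffer.toByteArray())
    }
}
```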
Optionally, after the voice information of the target object is obtained, the method further includes:
displaying a target control, the target control including a first child control and a second child control;
the second input is an input on at least one of the first child control and the second child control;
the converting at least part of the voice information into text information in response to the second input includes:
in response to the second input, obtaining, based on a first position of the first child control in the target control and a second position of the second child control in the target control, the voice information of the target object in a second time period, the second time period being the period between a third moment corresponding to the first position and a fourth moment corresponding to the second position;
converting at least part of the voice information of the target object in the second time period into text information.
In this embodiment, after obtaining the voice information of the target object, the terminal device may further edit the voice information according to the user's operation, so as to obtain at least part of the voice information of the target object.
Specifically, the terminal device displays a target control, and the user may operate on the child controls in the target control, which is the second input. The first child control and the second child control can change position in the target control as the second input proceeds, and the terminal device can obtain the at least part of the voice information to be intercepted according to the positions of the first child control and the second child control in the target control.
For example, as shown in Fig. 3c, the terminal device displays a target control 1 in the video clip; the target control 1 includes a first child control 11a and a second child control 11b. The user may operate on either of the first child control 11a and the second child control 11b to move that child control within the target control. According to the positions of the first and second child controls and the total duration of the voice information of the target object, the terminal device can determine the duration and the start and end moments of the part of the voice information to be intercepted, and thereby obtain that part of the voice information. For example, the terminal device may obtain that the video playback moment corresponding to the position of the first child control is 00:01:20 and the video playback moment corresponding to the position of the second child control is 00:02:20 (not shown in the figure), and thereby obtain the voice information of the character contained in the video between these two moments. After this operation, the user may operate on the play button 12 displayed in the target control 1, and the terminal device plays the intercepted at least part of the voice information so that the user can preview it. During playback, the play button 12 is switched to a pause button 13, as shown in Fig. 3d; the user may operate on the pause button 13 to control the playback progress at any time.
In this way, the user can make further fine adjustments based on the obtained voice information, which makes it convenient for the user to flexibly select part of the voice information and convert it into text information.
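A minimal sketch of how the positions of the two child controls could be mapped to the third and fourth moments of the second time period is given below; the proportional mapping and the 00:01:20 / 00:02:20 figures from the example above are used purely for illustration and are not mandated by the disclosure.

```kotlin
// Assumed model: the target control is a horizontal bar spanning the recorded voice clip,
// and each child control's position is expressed as a fraction of the bar's width.

data class ClipSelection(val startMs: Long, val endMs: Long)

fun selectSecondTimePeriod(
    firstFraction: Double,   // position of the first child control (0.0 .. 1.0)
    secondFraction: Double,  // position of the second child control (0.0 .. 1.0)
    totalDurationMs: Long    // total duration of the obtained voice information
): ClipSelection {
    val third = (minOf(firstFraction, secondFraction) * totalDurationMs).toLong()
    val fourth = (maxOf(firstFraction, secondFraction) * totalDurationMs).toLong()
    return ClipSelection(startMs = third, endMs = fourth) // third and fourth moments
}

fun main() {
    // Controls at 20% and 35% of a 400-second clip give 00:01:20 to 00:02:20.
    val selection = selectSecondTimePeriod(0.20, 0.35, totalDurationMs = 400_000)
    println(selection) // ClipSelection(startMs=80000, endMs=140000)
}
```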
Optionally, the method further includes:
displaying a deletion identifier;
receiving a seventh input on the deletion identifier;
deleting the voice information of the target object in response to the seventh input.
In this embodiment, the terminal device may display a deletion identifier after obtaining the voice information of the target object. The user's operation on the deletion identifier can trigger the terminal device to delete the voice information.
For example, as shown in Fig. 3e, the terminal device displays a deletion identifier 2 in the video clip. The user may operate on the deletion identifier, which is the seventh input, for example, a click operation or a press operation. In response to the seventh input, the terminal device deletes the obtained voice information of the target object, or deletes at least part of the obtained voice information. In this way, it is convenient for the user to edit the voice information, which improves operating flexibility.
Optionally, the displaying the text information on the video playback interface of the target video includes:
displaying the text information in a target dialog box on the video playback interface of the target video.
In this embodiment, the text information may be displayed in a dialog box in the video. Further, the corresponding text information may be scrolled in the dialog box following the character's voice.
The above target dialog box may be a dialog box selected from multiple dialog templates according to the user's operation. For example, as shown in Fig. 3f, the text corresponding to the character's voice information is displayed in a dialog box, and the dialog box issued by the character serves as a subtitle. The target dialog box may also be a dialog box automatically matched by the terminal based on the video scene and the video playback interface.
The above target dialog box may also be displayed upon triggering by the user's operation. For example, the user performs a long-press operation on the play button 12 shown in Fig. 3c, and the terminal device displays multiple dialog templates in the video clip for the user to select from. After the user selects a dialog box, the terminal device plays a demonstration video with the selected dialog box and displays text in that dialog box in the video.
Displaying the text information in a dialog box in this way can make video playback more entertaining, vividly present the character's caption information, and improve the effect of the video.
Optionally, the displaying the text information in the target dialog box on the video playback interface of the target video includes:
displaying at least two dialog boxes on the video playback interface of the target video;
receiving a third input of the user on a target dialog box among the at least two dialog boxes;
displaying the text information in the target dialog box in response to the third input.
In this embodiment, the terminal displays at least two dialog boxes on the playback interface of the target video, and the user may perform the third input on the displayed dialog boxes to select any one of them.
For example, as shown in Fig. 3g, the terminal displays multiple dialog boxes on the video playback interface, and the style of each dialog box may differ. The user may select a target dialog box from the displayed dialog boxes, and the terminal displays the text information in the target dialog box selected by the user.
In this way, the user can make a selection based on the dialog boxes already displayed, so as to obtain the desired effect and improve the display effect of the video. By adding a dialog box to a specific character and converting the voice into text in real time for display in the dialog box, the effect and entertainment value of the video can be improved.
Optionally, after the displaying the text information in the target dialog box on the video playback interface of the target video, the method further includes:
receiving a sixth input of the user on the target dialog box;
adjusting the format of the text information in the target dialog box in response to the sixth input.
In this embodiment, the user operates on the target dialog box, which is the sixth input, so as to adjust the format of the text in the dialog box, for example, the font, font size, and display effect of the text. The sixth input may specifically be a click operation, a long-press operation, or the like.
For example, as shown in Fig. 3h, the user operates on the target dialog box, the terminal enters a text editing mode and pops up a font format dialog box, and the user may choose the corresponding font format, color, size, and so on. While editing the text format, the terminal device may also edit the content of the text. In this way, it is convenient for the user to control the display effect of the text information, thereby improving the playback effect of the video.
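A hedged sketch of the sixth-input format adjustment follows; the TextStyle and DialogBox types are illustrative assumptions, since the disclosure only states that attributes such as font, size, and color of the text in the target dialog box can be adjusted, and that the content may also be edited.

```kotlin
// Assumed, minimal model of adjusting the format of the text in the target dialog box.

data class TextStyle(val font: String, val sizeSp: Int, val colorHex: String)

class DialogBoxContent(var text: String, var style: TextStyle)

/** Sixth input on the target dialog box: apply the format chosen in the pop-up editor. */
fun onSixthInput(dialogBox: DialogBoxContent, chosenStyle: TextStyle, editedText: String? = null) {
    dialogBox.style = chosenStyle           // font, size and color selected by the user
    editedText?.let { dialogBox.text = it } // the text content itself may also be edited
}
```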
Optionally, the target object includes N objects, the target dialog box includes N dialog boxes, and the N objects correspond one-to-one with the N dialog boxes;
the displaying the text information in the target dialog box on the video playback interface of the target video includes:
displaying, on the video playback interface of the target video, the text information of the voice information of the i-th object in the i-th dialog box of the N dialog boxes;
where N is an integer greater than 1, i is an integer greater than 0, and i ≤ N.
In this embodiment, the video contains multiple objects that produce voice, and the terminal may display the voice information of different objects in their corresponding dialog boxes.
For example, as shown in Fig. 3i, the video contains a man and a woman; the terminal displays the text information corresponding to the man's voice information in one dialog box, and displays the text information corresponding to the woman's voice information in another dialog box.
In this way, the first object and the second object can present the effect of a conversation on the screen, which increases the entertainment value and improves the effect of the video.
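The one-to-one mapping between the N objects and the N dialog boxes can be sketched as below; the classes and the console output are stand-ins assumed for illustration only, not part of the disclosed design.

```kotlin
// Assumed sketch: one dialog box per speaking object (N > 1, 1 <= i <= N).

data class SpeakerLine(val objectName: String, val text: String)

class MultiSpeakerCaptionView(private val dialogBoxIds: List<Int>) {

    /** Shows the text of the i-th object's voice information in the i-th dialog box. */
    fun show(lines: List<SpeakerLine>) {
        require(lines.size == dialogBoxIds.size) { "one dialog box per object" }
        lines.forEachIndexed { index, line ->
            renderInDialogBox(dialogBoxIds[index], "[${line.objectName}] ${line.text}")
        }
    }

    private fun renderInDialogBox(boxId: Int, text: String) {
        println("dialog box #$boxId -> $text") // stand-in for drawing on the playback interface
    }
}
```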
Optionally, the N objects include a first object and a second object, and the N dialog boxes include a first dialog box and a second dialog box;
after the displaying, on the video playback interface of the target video, the text information of the voice information of the i-th object in the i-th dialog box of the N dialog boxes, the method further includes:
receiving a fourth input of the user on the first dialog box or the second dialog box;
obtaining a first timbre of the first object and a second timbre of the second object in response to the fourth input;
outputting the voice information of the first object with the second timbre, and outputting the voice information of the second object with the first timbre.
In this embodiment, the user may operate on any of the multiple dialog boxes; operating on the first dialog box or the second dialog box is taken as an example. The above fourth input may be a slide operation, a press operation, a voice input, or the like.
According to the user's input, the terminal device may extract the first timbre of the first object and the second timbre of the second object, and exchange the timbres of the two objects, that is, output the voice information of the first object with the second timbre and output the voice information of the second object with the first timbre.
For example, as shown in Fig. 3g, the video contains a man and a woman; the text information corresponding to the man's voice information is displayed in the first dialog box, and the text information corresponding to the woman's voice information is displayed in the second dialog box. If the user slides from the first dialog box to the second dialog box, the terminal device swaps the timbres of the man and the woman: in the subsequently output voice information, the man's voice is output with the woman's timbre, and the woman's voice is output with the man's timbre.
In this way, the interaction with the user can be increased, and the entertainment value of the video can be improved.
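A minimal sketch of the fourth-input timbre exchange (and the fifth-input restoration described next) follows; timbre is modelled here as an opaque value, since actual voice conversion is outside the scope of this illustration and the names used are assumptions.

```kotlin
// Assumed model: swapping and restoring timbres between the first and second object.

data class Speaker(val name: String, var timbre: String)

class TimbreSwapper(private val first: Speaker, private val second: Speaker) {

    private val originalFirst = first.timbre
    private val originalSecond = second.timbre

    /** Fourth input (e.g. sliding from one dialog box to the other): exchange timbres, so the
     *  first object's voice is output with the second timbre and vice versa. */
    fun onFourthInput() {
        val tmp = first.timbre
        first.timbre = second.timbre
        second.timbre = tmp
    }

    /** Fifth input: restore the original timbre of at least one of the two objects. */
    fun onFifthInput(restoreFirst: Boolean = true, restoreSecond: Boolean = true) {
        if (restoreFirst) first.timbre = originalFirst
        if (restoreSecond) second.timbre = originalSecond
    }
}
```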
Optionally, after the outputting the voice information of the first object with the second timbre and outputting the voice information of the second object with the first timbre, the method further includes:
receiving a fifth input of the user on the first dialog box or the second dialog box;
restoring the timbre of at least one of the first object and the second object in response to the fifth input.
In this embodiment, the user may operate on the first dialog box or the second dialog box, or on both at the same time, which is the fifth input, for example, a slide operation, a press operation, a two-finger slide operation, or a voice input.
In response to the fifth input, the terminal device restores the timbre of the first object, or restores the timbre of the second object, or restores the timbres of both the first object and the second object at the same time. In this way, the terminal device can restore the timbre of an object according to the user's operation, which can improve the entertainment value of the video.
Optionally, after the displaying the text information in the target dialog box on the video playback interface of the target video, the method further includes:
receiving an eighth input on the dialog box;
hiding the dialog box in response to the eighth input.
In this embodiment, the eighth input may be a slide operation, a press operation, a voice input, or the like on the dialog box. In response to the user's operation, the terminal device may hide the dialog box, and may also hide the text information in the dialog box at the same time, or switch from displaying the text information in the dialog box to displaying it in another manner.
In this way, it is convenient for the user to control the display of the text information in the video, which improves operating flexibility.
On the basis of the embodiment corresponding to Fig. 1, the information processing method of this embodiment of the present invention displays the text information in a preset dialog box, which can improve the display effect of the video.
In the embodiments of the present invention, a specific gesture operation is used to intercept the voice segment of the corresponding video, and the segment is displayed by popping up a character dialog box, which highlights a specific voice conversation while increasing the entertainment value. A voice dialog box can also be enabled globally for a specific character, and the character's timbre can be transformed by dragging the dialog box; dragging the dialog box thus realizes the transformation of the voice timbre. This enriches the entertainment value of the video voice and subtitle display.
Referring to Fig. 4, Fig. 4 is a structural diagram of a terminal device according to an embodiment of the present invention. As shown in Fig. 4, the terminal device 400 includes:
a first receiving module 401, configured to receive a first input of a user on a target object in a target video;
a conversion module 402, configured to convert the voice information of the target object into text information in response to the first input;
a first display module 403, configured to display the text information on the video playback interface of the target video.
Optionally, the first display module is specifically configured to display the text information in a target dialog box on the video playback interface of the target video.
Optionally, the target object includes N objects, the target dialog box includes N dialog boxes, and the N objects correspond one-to-one with the N dialog boxes;
the first display module is specifically configured to:
display, on the video playback interface of the target video, the text information of the voice information of the i-th object in the i-th dialog box of the N dialog boxes;
where N is an integer greater than 1, i is an integer greater than 0, and i ≤ N.
Optionally, the first input includes a first sub-input at a first moment and a second sub-input at a second moment, the second moment being later than the first moment;
the conversion module is specifically configured to:
start recording the voice data of the target object in response to the first sub-input;
stop recording in response to the second sub-input, and obtain the voice information of the target object in a first time period, the first time period being the period between the first moment and the second moment;
convert the voice information of the target object in the first time period into text information.
Optionally, the conversion module includes:
an obtaining sub-module, configured to obtain the voice information of the target object in response to the first input;
a first receiving sub-module, configured to receive a second input of the user;
a conversion sub-module, configured to convert at least part of the voice information into text information in response to the second input.
Optionally, the terminal device further includes:
a second display module, configured to display a target control, the target control including a first child control and a second child control;
the second input is an input on at least one of the first child control and the second child control;
the conversion sub-module is specifically configured to: in response to the second input, obtain, based on a first position of the first child control in the target control and a second position of the second child control in the target control, the voice information of the target object in a second time period, the second time period being the period between a third moment corresponding to the first position and a fourth moment corresponding to the second position;
and convert at least part of the voice information of the target object in the second time period into text information.
Optionally, the first display module includes:
a first display sub-module, configured to display at least two dialog boxes on the video playback interface of the target video;
a second receiving sub-module, configured to receive a third input of the user on a target dialog box among the at least two dialog boxes;
a second display sub-module, configured to display the text information in the target dialog box in response to the third input.
Optionally, the N objects include a first object and a second object, and the N dialog boxes include a first dialog box and a second dialog box;
the terminal device further includes:
a second receiving module, configured to receive a fourth input of the user on the first dialog box or the second dialog box;
an obtaining module, configured to obtain a first timbre of the first object and a second timbre of the second object in response to the fourth input;
an output module, configured to output the voice information of the first object with the second timbre and output the voice information of the second object with the first timbre.
Optionally, the terminal device further includes:
a third receiving module, configured to receive a fifth input of the user on the first dialog box or the second dialog box;
a restoring module, configured to restore the timbre of at least one of the first object and the second object in response to the fifth input.
Optionally, the terminal device further includes:
a fourth receiving module, configured to receive a sixth input of the user on the target dialog box;
an adjusting module, configured to adjust the format of the text information in the target dialog box in response to the sixth input.
The terminal device 400 can implement each process implemented by the terminal device in the above method embodiments; to avoid repetition, details are not described here again.
With the terminal device 400 of this embodiment of the present invention, the terminal device can display, according to the user's operation, the text information of the voice information of a specified object on the video playback interface, which improves the flexibility of text display.
Fig. 5 is a schematic diagram of the hardware structure of a terminal device for implementing various embodiments of the present invention. The terminal device 500 includes, but is not limited to, a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will understand that the terminal device structure shown in Fig. 5 does not constitute a limitation on the terminal device; the terminal device may include more or fewer components than shown, or combine certain components, or have a different arrangement of components. In the embodiments of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a laptop, a palmtop computer, an in-vehicle mobile terminal, a wearable device, a pedometer, and the like.
The processor 510 is configured to:
control the user input unit 507 to receive a first input of a user on a target object in a target video;
in response to the first input, convert the voice information of the target object into text information;
control the display unit 506 to display the text information on the video playback interface of the target video.
In this way, the terminal device can display, according to the user's operation, the text information of the voice information of a specified object on the video playback interface, which improves the flexibility of text display.
Optionally, the first input includes a first sub-input at a first moment and a second sub-input at a second moment, the second moment being later than the first moment. When converting the voice information of the target object into text information in response to the first input, the processor 510 is configured to:
start recording the voice data of the target object in response to the first sub-input;
stop recording in response to the second sub-input, and obtain the voice information of the target object in a first time period, the first time period being the period between the first moment and the second moment;
convert the voice information of the target object in the first time period into text information.
Optionally, when converting the voice information of the target object into text information in response to the first input, the processor 510 is configured to:
obtain the voice information of the target object in response to the first input;
receive a second input of the user;
convert at least part of the voice information into text information in response to the second input.
Optionally, the processor 510 is further configured to:
display a target control, the target control including a first child control and a second child control;
the second input is an input on at least one of the first child control and the second child control;
when converting at least part of the voice information into text information in response to the second input, the processor 510 is configured to:
in response to the second input, obtain, based on a first position of the first child control in the target control and a second position of the second child control in the target control, the voice information of the target object in a second time period, the second time period being the period between a third moment corresponding to the first position and a fourth moment corresponding to the second position;
convert at least part of the voice information of the target object in the second time period into text information.
Optionally, when displaying the text information on the video playback interface of the target video, the processor 510 is configured to:
display the text information in a target dialog box on the video playback interface of the target video.
Optionally, when displaying the text information in the target dialog box on the video playback interface of the target video, the processor 510 is configured to:
display at least two dialog boxes on the video playback interface of the target video;
receive a third input of the user on a target dialog box among the at least two dialog boxes;
display the text information in the target dialog box in response to the third input.
Optionally, the target object includes N objects, the target dialog box includes N dialog boxes, and the N objects correspond one-to-one with the N dialog boxes. When displaying the text information in the target dialog box on the video playback interface of the target video, the processor 510 is configured to:
display, on the video playback interface of the target video, the text information of the voice information of the i-th object in the i-th dialog box of the N dialog boxes;
where N is an integer greater than 1, i is an integer greater than 0, and i ≤ N.
Optionally, the N objects include a first object and a second object, and the N dialog boxes include a first dialog box and a second dialog box. After displaying, on the video playback interface of the target video, the text information of the voice information of the i-th object in the i-th dialog box of the N dialog boxes, the processor 510 is further configured to:
receive a fourth input of the user on the first dialog box or the second dialog box;
obtain a first timbre of the first object and a second timbre of the second object in response to the fourth input;
output the voice information of the first object with the second timbre, and output the voice information of the second object with the first timbre.
Optionally, after outputting the voice information of the first object with the second timbre and outputting the voice information of the second object with the first timbre, the processor 510 is further configured to:
receive a fifth input of the user on the first dialog box or the second dialog box;
restore the timbre of at least one of the first object and the second object in response to the fifth input.
Optionally, the processor 510 is further configured to:
receive a sixth input of the user on the target dialog box;
adjust the format of the text information in the target dialog box in response to the sixth input.
It should be understood that the embodiment of the present invention in, radio frequency unit 501 can be used for receiving and sending messages or communication process in, signal Send and receive, specifically, by from base station downlink data receive after, to processor 510 handle;In addition, by uplink Data are sent to base station.In general, radio frequency unit 501 includes but is not limited to antenna, at least one amplifier, transceiver, coupling Device, low-noise amplifier, duplexer etc..In addition, radio frequency unit 501 can also by wireless communication system and network and other set Standby communication.
Terminal device provides wireless broadband internet by network module 502 for user and accesses, and such as user is helped to receive It sends e-mails, browse webpage and access streaming video etc..
Audio output unit 503 can be received by radio frequency unit 501 or network module 502 or in memory 509 The audio data of storage is converted into audio signal and exports to be sound.Moreover, audio output unit 503 can also provide and end The relevant audio output of specific function that end equipment 500 executes is (for example, call signal receives sound, message sink sound etc. Deng).Audio output unit 503 includes loudspeaker, buzzer and receiver etc..
Input unit 504 is for receiving audio or video signal.Input unit 504 may include graphics processor (Graphics Processing Unit, GPU) 5041 and microphone 5042, graphics processor 5041 is in video acquisition mode Or the image data of the static images or video obtained in image capture mode by image capture apparatus (such as camera) carries out Reason.Treated, and picture frame may be displayed on display unit 506.Through graphics processor 5041, treated that picture frame can be deposited Storage is sent in memory 509 (or other storage mediums) or via radio frequency unit 501 or network module 502.Mike Wind 5042 can receive sound, and can be audio data by such acoustic processing.Treated audio data can be The format output that mobile communication base station can be sent to via radio frequency unit 501 is converted in the case where telephone calling model.
Terminal device 500 further includes at least one sensor 505, such as optical sensor, motion sensor and other biographies Sensor.Specifically, optical sensor includes ambient light sensor and proximity sensor, wherein ambient light sensor can be according to environment The light and shade of light adjusts the brightness of display panel 5061, and proximity sensor can close when terminal device 500 is moved in one's ear Display panel 5061 and/or backlight.As a kind of motion sensor, accelerometer sensor can detect in all directions (general For three axis) size of acceleration, it can detect that size and the direction of gravity when static, can be used to identify terminal device posture (ratio Such as horizontal/vertical screen switching, dependent game, magnetometer pose calibrating), Vibration identification correlation function (such as pedometer, tap);It passes Sensor 505 can also include fingerprint sensor, pressure sensor, iris sensor, molecule sensor, gyroscope, barometer, wet Meter, thermometer, infrared sensor etc. are spent, details are not described herein.
Display unit 506 is for showing information input by user or being supplied to the information of user.Display unit 506 can wrap Display panel 5061 is included, liquid crystal display (Liquid Crystal Display, LCD), Organic Light Emitting Diode can be used Forms such as (Organic Light-Emitting Diode, OLED) configure display panel 5061.
User input unit 507 can be used for receiving the number or character information of input, and generate the use with terminal device Family setting and the related key signals input of function control.Specifically, user input unit 507 include touch panel 5071 and Other input equipments 5072.Touch panel 5071, also referred to as touch screen collect the touch operation of user on it or nearby (for example user uses any suitable objects or attachment such as finger, stylus on touch panel 5071 or in touch panel 5071 Neighbouring operation).Touch panel 5071 may include both touch detecting apparatus and touch controller.Wherein, touch detection Device detects the touch orientation of user, and detects touch operation bring signal, transmits a signal to touch controller;Touch control Device processed receives touch information from touch detecting apparatus, and is converted into contact coordinate, then gives processor 510, receiving area It manages the order that device 510 is sent and is executed.Furthermore, it is possible to more using resistance-type, condenser type, infrared ray and surface acoustic wave etc. Seed type realizes touch panel 5071.In addition to touch panel 5071, user input unit 507 can also include other input equipments 5072.Specifically, other input equipments 5072 can include but is not limited to physical keyboard, function key (such as volume control button, Switch key etc.), trace ball, mouse, operating stick, details are not described herein.
Further, touch panel 5071 can be covered on display panel 5061, when touch panel 5071 is detected at it On or near touch operation after, send processor 510 to determine the type of touch event, be followed by subsequent processing device 510 according to touching The type for touching event provides corresponding visual output on display panel 5061.Although in Fig. 5, touch panel 5071 and display Panel 5061 is the function that outputs and inputs of realizing terminal device as two independent components, but in some embodiments In, can be integrated by touch panel 5071 and display panel 5061 and realize the function that outputs and inputs of terminal device, it is specific this Place is without limitation.
Interface unit 508 is the interface that external device (ED) is connect with terminal device 500.For example, external device (ED) may include having Line or wireless head-band earphone port, external power supply (or battery charger) port, wired or wireless data port, storage card end Mouth, port, the port audio input/output (I/O), video i/o port, earphone end for connecting the device with identification module Mouthful etc..Interface unit 508 can be used for receiving the input (for example, data information, electric power etc.) from external device (ED) and By one or more elements that the input received is transferred in terminal device 500 or can be used in 500 He of terminal device Data are transmitted between external device (ED).
Memory 509 can be used for storing software program and various data.Memory 509 can mainly include storing program area The storage data area and, wherein storing program area can (such as the sound of application program needed for storage program area, at least one function Sound playing function, image player function etc.) etc.;Storage data area can store according to mobile phone use created data (such as Audio data, phone directory etc.) etc..In addition, memory 509 may include high-speed random access memory, it can also include non-easy The property lost memory, a for example, at least disk memory, flush memory device or other volatile solid-state parts.
Processor 510 is the control centre of terminal device, utilizes each of various interfaces and the entire terminal device of connection A part by running or execute the software program and/or module that are stored in memory 509, and calls and is stored in storage Data in device 509 execute the various functions and processing data of terminal device, to carry out integral monitoring to terminal device.Place Managing device 510 may include one or more processing units;Preferably, processor 510 can integrate application processor and modulatedemodulate is mediated Manage device, wherein the main processing operation system of application processor, user interface and application program etc., modem processor is main Processing wireless communication.It is understood that above-mentioned modem processor can not also be integrated into processor 510.
Terminal device 500 can also include the power supply 511 (such as battery) powered to all parts, it is preferred that power supply 511 Can be logically contiguous by power-supply management system and processor 510, to realize management charging by power-supply management system, put The functions such as electricity and power managed.
In addition, the terminal device 500 includes some functional modules that are not shown, which are not described in detail herein.
Preferably, an embodiment of the present invention further provides a terminal device, including a processor 510, a memory 509, and a computer program that is stored in the memory 509 and executable on the processor 510. When the computer program is executed by the processor 510, each process of the foregoing information processing method embodiment is implemented, and the same technical effect can be achieved. To avoid repetition, details are not described herein again.
An embodiment of the present invention further provides a computer-readable storage medium. A computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, each process of the foregoing information processing method embodiment is implemented, and the same technical effect can be achieved. To avoid repetition, details are not described herein again. The computer-readable storage medium is, for example, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a list of elements not only includes those elements but also includes other elements that are not expressly listed, or further includes elements inherent to such a process, method, article, or apparatus. In the absence of more restrictions, an element preceded by the phrase "including a ..." does not preclude the existence of other identical elements in the process, method, article, or apparatus that includes the element.
Through the description of the foregoing embodiments, a person skilled in the art can clearly understand that the methods of the foregoing embodiments may be implemented by software together with a necessary general-purpose hardware platform, or certainly by hardware, although in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention are described above with reference to the accompanying drawings, but the present invention is not limited to the foregoing specific embodiments. The foregoing specific embodiments are merely illustrative rather than restrictive. Inspired by the present invention, a person of ordinary skill in the art may also make many other forms without departing from the purpose of the present invention and the protection scope of the claims, all of which fall within the protection of the present invention.

Claims (15)

1. An information processing method, characterized by comprising:
receiving a first input of a user on a target object in a target video;
in response to the first input, converting voice information of the target object into text information; and
displaying the text information on a video playing interface of the target video.
2. The method according to claim 1, wherein the first input comprises a first sub-input at a first moment and a second sub-input at a second moment, and the second moment is later than the first moment;
the converting, in response to the first input, the voice information of the target object into text information comprises:
in response to the first sub-input, starting to record voice data of the target object;
in response to the second sub-input, stopping the recording to obtain voice information of the target object in a first time period, the first time period being a period between the first moment and the second moment; and
converting the voice information of the target object in the first time period into text information.
3. The method according to claim 1, wherein the converting, in response to the first input, the voice information of the target object into text information comprises:
in response to the first input, obtaining the voice information of the target object;
receiving a second input of the user; and
in response to the second input, converting at least part of the voice information into text information.
4. The method according to claim 3, wherein after the obtaining the voice information of the target object, the method further comprises:
displaying a target control, wherein the target control comprises a first child control and a second child control;
the second input is an input on at least one of the first child control and the second child control;
and the converting, in response to the second input, at least part of the voice information into text information comprises:
in response to the second input, obtaining, based on a first position of the first child control in the target control and a second position of the second child control in the target control, voice information of the target object in a second time period, the second time period being a period between a third moment corresponding to the first position and a fourth moment corresponding to the second position; and
converting at least part of the voice information of the target object in the second time period into text information.
5. The method according to claim 1, wherein the displaying the text information on the video playing interface of the target video comprises:
displaying the text information in a target dialog box on the video playing interface of the target video.
6. The method according to claim 5, wherein the displaying the text information in the target dialog box on the video playing interface of the target video comprises:
displaying at least two dialog boxes on the video playing interface of the target video;
receiving a third input of the user on the target dialog box among the at least two dialog boxes; and
in response to the third input, displaying the text information in the target dialog box.
7. The method according to claim 5, wherein the target object comprises N objects, the target dialog box comprises N dialog boxes, and the N objects are in one-to-one correspondence with the N dialog boxes;
the displaying the text information in the target dialog box on the video playing interface of the target video comprises:
displaying, on the video playing interface of the target video, text information of voice information of an i-th object in an i-th dialog box among the N dialog boxes;
wherein N is an integer greater than 1, and i is an integer greater than 0 and i ≤ N.
8. The method according to claim 7, wherein the N objects comprise a first object and a second object, and the N dialog boxes comprise a first dialog box and a second dialog box;
after the displaying, on the video playing interface of the target video, the text information of the voice information of the i-th object in the i-th dialog box among the N dialog boxes, the method further comprises:
receiving a fourth input of the user on the first dialog box or the second dialog box;
in response to the fourth input, obtaining a first timbre of the first object and a second timbre of the second object; and
outputting the voice information of the first object in the second timbre, and outputting the voice information of the second object in the first timbre.
9. The method according to claim 8, wherein after the outputting the voice information of the first object in the second timbre and outputting the voice information of the second object in the first timbre, the method further comprises:
receiving a fifth input of the user on the first dialog box or the second dialog box; and
in response to the fifth input, restoring the timbre of at least one of the first object and the second object.
10. The method according to claim 5, wherein after the displaying the text information in the target dialog box on the video playing interface of the target video, the method further comprises:
receiving a sixth input of the user on the target dialog box; and
in response to the sixth input, adjusting a format of the text information in the target dialog box.
11. A terminal device, characterized by comprising:
a first receiving module, configured to receive a first input of a user on a target object in a target video;
a conversion module, configured to convert, in response to the first input, voice information of the target object into text information; and
a first display module, configured to display the text information on a video playing interface of the target video.
12. The terminal device according to claim 11, wherein the first display module is specifically configured to display the text information in a target dialog box on the video playing interface of the target video.
13. The terminal device according to claim 12, wherein the target object comprises N objects, the target dialog box comprises N dialog boxes, and the N objects are in one-to-one correspondence with the N dialog boxes;
the first display module is specifically configured to:
display, on the video playing interface of the target video, text information of voice information of an i-th object in an i-th dialog box among the N dialog boxes;
wherein N is an integer greater than 1, and i is an integer greater than 0 and i ≤ N.
14. A terminal device, characterized by comprising a memory, a processor, and a computer program that is stored in the memory and executable on the processor, wherein when the processor executes the computer program, the steps in the information processing method according to any one of claims 1 to 10 are implemented.
15. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the information processing method according to any one of claims 1 to 10 are implemented.
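
For readability, the following Kotlin sketch walks through the flow of claims 1 and 2 under stated assumptions: the first sub-input marks the start moment, the second sub-input marks the end moment, the voice information of the target object in that period is converted into text information, and the text is displayed on the video playing interface. The SpeechToText and VideoPlayerUi interfaces are hypothetical placeholders introduced only for illustration; the application does not name any concrete API.

    // Hypothetical collaborators; the application does not specify concrete APIs.
    interface SpeechToText {
        fun transcribe(voice: ByteArray): String
    }

    interface VideoPlayerUi {
        fun currentPositionMs(): Long
        fun voiceOfObjectBetween(objectId: String, startMs: Long, endMs: Long): ByteArray
        fun showDialogBox(objectId: String, text: String)
    }

    // Minimal state machine for claims 1 and 2: the first sub-input records the
    // first moment, the second sub-input records the second (later) moment, and the
    // voice information of that period is converted to text and displayed.
    class ObjectCaptioner(
        private val player: VideoPlayerUi,
        private val speechToText: SpeechToText
    ) {
        private var firstMomentMs: Long? = null

        // First sub-input, e.g. pressing down on the target object in the video.
        fun onFirstSubInput() {
            firstMomentMs = player.currentPositionMs()
        }

        // Second sub-input, e.g. releasing; must be later than the first moment.
        fun onSecondSubInput(objectId: String) {
            val start = firstMomentMs ?: return
            val end = player.currentPositionMs()
            if (end <= start) return              // second moment must be later

            val voice = player.voiceOfObjectBetween(objectId, start, end)
            val text = speechToText.transcribe(voice)
            player.showDialogBox(objectId, text)  // show on the video playing interface
            firstMomentMs = null
        }
    }

Claims 7 and 8 would extend this with one dialog box per object and a timbre swap between two objects; the sketch does not attempt those steps.
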
CN201910640218.6A 2019-07-16 2019-07-16 A kind of information processing method and terminal device Pending CN110379428A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910640218.6A CN110379428A (en) 2019-07-16 2019-07-16 A kind of information processing method and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910640218.6A CN110379428A (en) 2019-07-16 2019-07-16 A kind of information processing method and terminal device

Publications (1)

Publication Number Publication Date
CN110379428A 2019-10-25

Family

ID=68253411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910640218.6A Pending CN110379428A (en) 2019-07-16 2019-07-16 A kind of information processing method and terminal device

Country Status (1)

Country Link
CN (1) CN110379428A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1573928A (en) * 2003-05-29 2005-02-02 微软公司 Semantic object synchronous understanding implemented with speech application language tags
CN101518055A (en) * 2006-09-21 2009-08-26 松下电器产业株式会社 Subtitle generation device, subtitle generation method, and subtitle generation program
CN104049885A (en) * 2013-03-15 2014-09-17 Lg电子株式会社 Mobile terminal and method of controlling the mobile terminal
US20170083633A1 (en) * 2015-09-21 2017-03-23 International Business Machines Corporation System for suggesting search terms
CN107241616A (en) * 2017-06-09 2017-10-10 腾讯科技(深圳)有限公司 video lines extracting method, device and storage medium
CN108572764A (en) * 2018-03-13 2018-09-25 努比亚技术有限公司 A kind of word input control method, equipment and computer readable storage medium
CN109785845A (en) * 2019-01-28 2019-05-21 百度在线网络技术(北京)有限公司 Method of speech processing, device and equipment

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111010608A (en) * 2019-12-20 2020-04-14 维沃移动通信有限公司 Video playing method and electronic equipment
CN111491212A (en) * 2020-04-17 2020-08-04 维沃移动通信有限公司 Video processing method and electronic equipment
CN113207025A (en) * 2021-04-30 2021-08-03 北京字跳网络技术有限公司 Video processing method and device, electronic equipment and storage medium
CN113207025B (en) * 2021-04-30 2023-03-28 北京字跳网络技术有限公司 Video processing method and device, electronic equipment and storage medium
CN116095233A (en) * 2022-05-20 2023-05-09 荣耀终端有限公司 Barrier-free conversation method and terminal equipment

Similar Documents

Publication Publication Date Title
CN108737904B (en) Video data processing method and mobile terminal
CN110379428A (en) A kind of information processing method and terminal device
CN108108214A (en) A kind of guiding method of operating, device and mobile terminal
CN107613362A (en) A kind of video display control method and mobile terminal
CN107864353B (en) A kind of video recording method and mobile terminal
CN108920239A (en) A kind of long screenshotss method and mobile terminal
CN109151546A (en) A kind of method for processing video frequency, terminal and computer readable storage medium
CN107943390A (en) A kind of word clone method and mobile terminal
CN109857905A (en) A kind of video editing method and terminal device
CN108519850A (en) A kind of keyboard interface display methods and mobile terminal
CN108712577A (en) A kind of call mode switching method and terminal device
CN109167884A (en) A kind of method of servicing and device based on user speech
CN109871164A (en) A kind of message method and terminal device
CN110515521A (en) A kind of screenshot method and mobile terminal
CN108307106A (en) A kind of image processing method, device and mobile terminal
CN109215655A (en) The method and mobile terminal of text are added in video
CN109710165A (en) A kind of drawing processing method and mobile terminal
CN109462768A (en) A kind of caption presentation method and terminal device
CN108536366A (en) A kind of application window method of adjustment and terminal
CN109922294A (en) A kind of method for processing video frequency and mobile terminal
CN109102555A (en) A kind of image edit method and terminal
CN108182031A (en) A kind of photographic method, terminal and computer readable storage medium
CN110442279A (en) A kind of message method and mobile terminal
CN109993821A (en) A kind of expression playback method and mobile terminal
CN109981904A (en) A kind of method for controlling volume and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191025