CN108763475B - Recording method, recording device and terminal equipment - Google Patents


Info

Publication number
CN108763475B
CN108763475B (application CN201810532492.7A)
Authority
CN
China
Prior art keywords
recording
information
segment
index information
index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810532492.7A
Other languages
Chinese (zh)
Other versions
CN108763475A
Inventor
Wang Jianhui (王建辉)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201810532492.7A priority Critical patent/CN108763475B/en
Publication of CN108763475A publication Critical patent/CN108763475A/en
Application granted granted Critical
Publication of CN108763475B publication Critical patent/CN108763475B/en

Landscapes

  • Television Signal Processing For Recording (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention discloses a recording method, a recording apparatus, and a terminal device. The method includes: acquiring a recording segment associated with person identification information, wherein the recording segment carries time node identifiers representing a start time point and an end time point; generating index information corresponding to the recording segment according to the person identification information and the time node identifiers; and generating a recording file according to the recording segment and the index information. The invention improves the efficiency with which a user searches a recorded file and improves the user experience.

Description

Recording method, recording device and terminal equipment
Technical Field
The present invention relates to the field of terminals, and in particular, to a recording method, a recording apparatus, and a terminal device.
Background
In modern life, voice and video information from scenes such as conferences and training sessions generally needs to be recorded and stored. For example, meetings within a company often need to be recorded for subsequent minutes and summaries, and the training content of an educational institution often needs to be recorded as material for online lectures.
At present, content such as meetings and training sessions is mainly recorded from beginning to end in one pass by recording equipment (such as a video camera). However, when a user needs to find particular content in a recorded audio/video file, the user must repeatedly drag the file's progress bar to search; especially when the recorded file is long, the efficiency of locating the target audio/video clip is severely reduced.
Disclosure of Invention
The embodiments of the invention aim to provide a recording method, a recording apparatus, and a terminal device, so as to solve the problem of low efficiency when a user searches for a target segment in an existing recorded video or audio file.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, a recording method is provided, which includes:
acquiring a recording segment associated with person identification information, wherein the recording segment carries time node identifiers representing a start time point and an end time point;
generating index information corresponding to the recording segment according to the person identification information and the time node identifiers;
and generating a recording file according to the recording segment and the index information.
In a second aspect, there is provided a recording apparatus comprising:
an acquisition module, configured to acquire a recording segment associated with person identification information, wherein the recording segment carries time node identifiers representing a start time point and an end time point;
an index module, configured to generate index information corresponding to the recording segment according to the person identification information and the time node identifiers;
and a file module, configured to generate a recording file according to the recording segment and the index information.
In a third aspect, a terminal device is provided, the terminal device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the method according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method according to the first aspect.
In the embodiments of the invention, during the recording of a conference, a training session, or the like, a recording segment associated with person identification information can be acquired, where the recording segment carries time node identifiers representing a start time point and an end time point; index information corresponding to the recording segment can be generated according to the person identification information and the time node identifiers, and a recording file can be generated according to the recording segment and the index information. Therefore, when searching the recording file, a user can directly locate the recording segments of a specific person according to the person identification information in the index information, and can directly select that person's recording segment in a certain time period according to the time node identifiers in the index information, so that the search efficiency for the recording file is improved and the user experience is enhanced.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a flow chart of a recording method of one embodiment of the present invention;
FIG. 2 is a diagram of first index information according to the present invention;
FIG. 3 is a first schematic diagram illustrating selection of first index information according to the present invention;
FIG. 4 is a second schematic diagram illustrating the selection of the first index information according to the present invention;
FIG. 5 is a third schematic diagram illustrating selection of first index information according to the present invention;
FIG. 6 is a diagram illustrating the selection of third index information according to the present invention;
fig. 7 is a flowchart of a playing method of a recording file according to an embodiment of the present invention;
fig. 8 is a block diagram of a recording apparatus according to an embodiment of the present invention;
fig. 9 is a structural diagram of a playback apparatus that records a file according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a hardware configuration of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flow chart of a recording method according to an embodiment of the present invention. As shown in fig. 1, the method includes:
and step 120, acquiring a recording segment associated with the person identification information, wherein the recording segment carries time node identifications representing a starting time point and an ending time point.
The person identification information may be at least one of the person's name information, image information, timbre information, and serial number.
For example, before a meeting or training session begins, the recording apparatus may collect the name information, image information, and timbre information of each participant, and assign each participant a serial number. The timbre represents the sound characteristics of each person. The serial number may be an Arabic numeral; for example, serial numbers may be incremented from 1, with the largest serial number representing the total number of participants.
Once recording starts, each person who speaks can be identified from the image or the timbre. For example, the name of the speaker may be determined from the image or the timbre.
Since recording advances in time, the recording segments of each person can be recorded in turn according to the chronological order in which each person speaks. The start time point and the end time point of each person's speech can be marked with time node identifiers, so as to delimit the recording segment of each person.
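The time-node bookkeeping described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the `RecordingSegment` type and the event format are assumptions, and a segment is assumed to end when the next speaker starts (or the recording stops).

```python
from dataclasses import dataclass

@dataclass
class RecordingSegment:
    person_id: int   # serial number from the person identification information
    start: float     # start time point (seconds from the beginning of recording)
    end: float       # end time point (seconds from the beginning of recording)

def segment_timeline(events, recording_end):
    """Turn chronological (person_id, speech_start) events into
    time-node-tagged segments: each segment ends when the next
    person starts speaking, or when the recording stops."""
    segments = []
    for i, (person_id, start) in enumerate(events):
        end = events[i + 1][1] if i + 1 < len(events) else recording_end
        segments.append(RecordingSegment(person_id, start, end))
    return segments
```

For instance, `segment_timeline([(2, 0.0), (3, 120.0)], 300.0)` yields one segment for person 2 (0–120 s) and one for person 3 (120–300 s).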
And 140, generating index information corresponding to the recording segment according to the person identification information and the time node identification.
The index information includes the person identification information and the time node identifiers. The person identification information allows the user to select by at least one of a name, an image, a timbre, and a serial number, so that the user can quickly determine the person of interest. The time node identifiers allow the user to quickly select the recording segment of the corresponding person in at least one time period. The user can thus quickly select a person's recording segment for a given time period.
And 160, generating a recording file according to the recording segment and the index information.
There may be one or more recording files, and the recording files may be stored according to their size.
In the embodiments of the invention, during the recording of a conference, a training session, or the like, a recording segment associated with person identification information can be acquired, where the recording segment carries time node identifiers representing a start time point and an end time point; index information corresponding to the recording segment can be generated according to the person identification information and the time node identifiers, and a recording file can be generated according to the recording segment and the index information. Therefore, when searching the recording file, a user can directly locate the recording segments of a specific person according to the person identification information in the index information, and can directly select that person's recording segment in a certain time period according to the time node identifiers in the index information, so that the search efficiency for the recording file is improved and the user experience is enhanced.
In an implementation manner of this embodiment, the person identification information may be established in advance in different ways. For example, for an enterprise meeting, data may be retrieved directly from the attendance database to establish person identification information for each participant, including at least one of a name, an image, a timbre, and a serial number. Alternatively, the recording device may capture an image of each participant and assign a serial number according to the meeting's invitation list (i.e., the names). When the meeting starts, the name of the speaker is determined from the image (i.e., who is speaking is determined), and the person's timbre is determined from the speech, thereby creating person identification information for each person.
In the person identification information, the name is the basic attribute of the person, the image and the timbre can be used to identify who is speaking, and the serial number determines the total number of participants. The image is preferably a face image of the real person, and the timbre is the person's sound characteristic information. Preferably, combining the image and the timbre allows the speaker to be determined more accurately. It is therefore preferable to record based on all four pieces of person identification information.
In one embodiment of the present invention, before the recording segment associated with the person identification information is acquired, the person identification information may be determined from the received sound information of the person. The person identification information may be stored in the recording device in advance. During the recording of a meeting or training session, when a person speaks, that person can be identified against the stored person identification information. For example, an image of the person may be captured while the person is speaking and compared against the image in the stored person identification information to determine who is speaking. Likewise, the person's timbre may be collected while speaking and matched against the timbre in the stored person identification information.
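The compare-and-match step can be illustrated with a toy timbre matcher. This is only a sketch under strong assumptions: voice features are assumed to be pre-extracted into plain vectors, the 0.8 similarity threshold is arbitrary, and the patent does not specify a matching algorithm — cosine similarity is simply one common choice.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify_speaker(voice_vector, profiles, threshold=0.8):
    """Compare a voice feature vector against stored timbre vectors
    (serial number -> vector) and return the serial number of the
    best match above the threshold, or None if nothing matches."""
    best_id, best_score = None, threshold
    for person_id, stored_vector in profiles.items():
        score = cosine_similarity(voice_vector, stored_vector)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```

A real system would extract the vectors with a speaker-embedding model; here they are just hand-made stand-ins.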
In an implementation manner of this embodiment, step 120 includes:
recording the time node identifiers of the recording segment at the start time point and the end time point according to the person identification information;
and acquiring the recording segment according to the time node identifiers.
During recording, when a person speaks, the start time point and the end time point of the speech can be recorded; these are the time node identifiers. The start time point is the moment the person starts speaking, and the end time point is the moment the person finishes. Of course, the end time point may also be understood as the moment the next person starts speaking.
From the start time point and the end time point of each person's speech, that person's recording segments can be generated. A person may speak multiple times during recording, so there may be several recording segments for the same person.
In an implementation manner of this embodiment, step 140 includes:
generating first index information for the recording segment according to at least one of the name information, image information, timbre information, and serial number information in the person identification information;
and generating second index information for the recording segment according to the time node identifiers.
Since the first index information is at least one of the name information, image information, timbre information, and serial number information in the person identification information, the user can select an item of first index information to quickly determine the person of interest. For example, consider the following pieces of first index information: Zhang San & Zhang San portrait & characteristic timbre & 1; Li Si & Li Si portrait & characteristic timbre & 2; Wang Wu & Wang Wu portrait & characteristic timbre & 3; Zhao Liu & Zhao Liu portrait & characteristic timbre & 4. Here, & is the separator between the pieces of person identification information. The user can select one of the names Zhang San, Li Si, Wang Wu, or Zhao Liu to determine a specific person, or select a person's portrait, timbre, or serial number to do the same.
The second index information includes the time node identifiers, i.e., the start time point and the end time point of each recording segment. After the user determines the person according to the first index information, the user can determine that person's recording segment in a specific time period according to the second index information.
In this embodiment, the second index information sits under the first index information. Through this directory hierarchy, the user may first determine a person from the first index information, and then select that person's recording segment in a certain time period from the second index information.
Fig. 2 is a schematic diagram of first index information according to the present invention. As shown in fig. 2, the first index information may include several pieces of person identification information, such as an Arabic-numeral index number, a name, an image, and a timbre. The pieces of person identification information in the first index information may be added or removed as needed; this embodiment does not limit their number.
Fig. 3 is a first schematic diagram illustrating selection of the first index information according to the present invention. As shown in fig. 3, the user can select any one of the numbers 1 to 4 in the first index information to specify a person. For example, the user in fig. 3 selects Li Si, corresponding to the number 2, as the determined person.
FIG. 4 is a second schematic diagram illustrating the selection of the first index information. As shown in fig. 4, the user may select one of the names Zhang San, Li Si, Wang Wu, and Zhao Liu in the first index information to specify the person. For example, the user in FIG. 4 selects Li Si as the determined person.
FIG. 5 is a third schematic diagram illustrating selection of the first index information. As shown in fig. 5, the user may select a person's image in the first index information to determine the person. It should be noted that the image shown in fig. 5 is only for understanding and explaining the present embodiment, and stands in for the real face image of the corresponding person.
The second index information is a time node identifier, which may be a start time point and an end time point, indicating a time segment in which the corresponding person speaks. The length of the time segment can be calculated from the start time point and the end time point. The same piece of first index information may have several pieces of second index information, indicating that the corresponding person spoke several times in one training session or conference. For example, assume the entire conference lasts 32 minutes and the speaking sequence is: Li Si (0:00-2:00), Wang Wu (2:00-5:00), Zhang San (5:00-8:00), Zhao Liu (8:00-12:00), Zhang San (12:00-15:00), Li Si (15:00-18:00), Zhao Liu (18:00-25:00), Wang Wu (25:00-30:00), Zhang San (30:00-32:00). Then the following index relationship can be established:
1. First index information: Zhang San & Zhang San portrait & characteristic timbre & 1;
second index information (including three time node identifiers):
index one (5:00-8:00);
index two (12:00-15:00);
index three (30:00-32:00).
2. First index information: Li Si & Li Si portrait & characteristic timbre & 2;
second index information (including two time node identifiers):
index one (0:00-2:00);
index two (15:00-18:00).
3. First index information: Wang Wu & Wang Wu portrait & characteristic timbre & 3;
second index information (including two time node identifiers):
index one (2:00-5:00);
index two (25:00-30:00).
4. First index information: Zhao Liu & Zhao Liu portrait & characteristic timbre & 4;
second index information (including two time node identifiers):
index one (8:00-12:00);
index two (18:00-25:00).
After selecting the corresponding person according to the first index information, the user may further select that person's speaking segment under the first index information. For example, after the user selects Zhang San according to the first index information, index two (12:00-15:00) can be selected according to the second index information; that is, Zhang San's speech from the 12th to the 15th minute is selected.
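The index relationship above can be reproduced mechanically from the speaking sequence. The sketch below uses only the example data from the text; the function name `build_index` and the tuple format are illustrative, not part of the patent.

```python
from collections import OrderedDict

# Speaking sequence from the 32-minute example: (name, start, end) in minutes.
sequence = [
    ("Li Si", 0, 2), ("Wang Wu", 2, 5), ("Zhang San", 5, 8),
    ("Zhao Liu", 8, 12), ("Zhang San", 12, 15), ("Li Si", 15, 18),
    ("Zhao Liu", 18, 25), ("Wang Wu", 25, 30), ("Zhang San", 30, 32),
]

def build_index(sequence):
    """First index: one entry per person (in order of first speech);
    second index: that person's time node identifiers, chronologically."""
    index = OrderedDict()
    for name, start, end in sequence:
        index.setdefault(name, []).append((start, end))
    return index
```

Running `build_index(sequence)` gives Zhang San three time node identifiers — (5, 8), (12, 15), (30, 32) — and each of the others two, matching the index relationship listed above.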
In an implementation manner of this embodiment, the recording segments include audio segments and video segments. When the index information corresponding to the recording segment is generated according to the person identification information and the time node identifiers, third index information can also be generated, containing an audio index item and a video index item corresponding to the audio segment and the video segment, respectively.
Fig. 6 is a diagram illustrating selection of third index information according to the present invention. As shown in fig. 6, the user may choose to play an audio or a video clip. For example, when the user selects Li Si in the first index information, the time slice at index two (15:00-18:00) in the second index information may be selected, and then the audio or video clip of Li Si at index two (15:00-18:00) may be further selected. In this way the user can accurately and quickly determine the person, and that person's audio or video clip in a certain time period, improving the efficiency of searching the recorded file.
In this embodiment, the second index information sits under the first index information, and the third index information under the second. Through this directory hierarchy, a user may first determine a person from the first index information, then select that person's recording segment in a certain time period from the second index information, and finally choose an audio segment or a video segment from the third index information.
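The three-level directory hierarchy (person → time period → audio/video) can be modeled as nested dictionaries. This is a hypothetical sketch: the segment-key naming scheme and both function names are invented for illustration.

```python
def build_three_level_index(segments):
    """segments: iterable of (name, start, end, media) tuples, where
    media is 'audio' or 'video'.  Returns a nested mapping:
    name -> (start, end) -> media -> segment key."""
    index = {}
    for name, start, end, media in segments:
        slot = index.setdefault(name, {}).setdefault((start, end), {})
        slot[media] = f"{name}_{start}-{end}_{media}"
    return index

def select(index, name, time_slot, media):
    """Walk the hierarchy: first index (person) -> second index
    (time slot) -> third index (audio or video item)."""
    return index[name][time_slot][media]
```

For example, selecting Li Si, the slot (15, 18), and "video" retrieves the key of that single video clip.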
In an embodiment of the present invention, when acquiring a recording segment associated with person identification information, an audio segment associated with the person identification information may be acquired, where the audio segment carries time node identifiers indicating a start time point and an end time point; the audio segment and the image segment recorded synchronously with it are then synthesized into a video segment; finally, a recording segment containing the audio segment and the video segment is generated.
The audio segment includes time node identifiers, and so does the synthesized video segment. Through the time node identifiers of the video segment, the video segment of the corresponding person speaking in a certain time period can be selected.
The image segment is the recorded image information. In this embodiment, the attribute information of the image information (including the number of frames) matches the attribute information of the audio in the audio segment (including the sampling frequency). When the audio segment and the synchronously recorded image segment are synthesized into a video segment, the recording device can fully align the audio information in the audio segment with the image information in the image segment according to the person's actions, including gestures and mouth movements, improving the accuracy and efficiency of synthesizing the audio and image segments.
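One concrete aspect of matching the image attributes to the audio attributes is duration: the image track must contain enough frames to span the audio clip. A minimal sketch of that arithmetic follows (the alignment itself, e.g. matching lip movements, is far beyond this and is not specified by the source):

```python
def frames_for_audio(num_samples, sample_rate, fps):
    """Number of image frames needed for the image track to span the
    same duration as an audio clip of num_samples at sample_rate."""
    duration = num_samples / sample_rate  # audio duration in seconds
    return round(duration * fps)
```

For a 3-second clip at 48 kHz and a 30 fps image track, this yields 90 frames.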
Typically, a video segment occupies far more storage space than an audio segment. Therefore, when generating recording files from the recording segments and the index information, the number of recording files can be determined according to the sizes of the video segments. For example, each video segment may become its own recording file, or one or more consecutive video segments may form one recording file. The audio segments may be stored as a single recording file. Each recording file stores the index information matched to it, so that the audio segments and video segments can be stored conveniently.
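The size-based grouping of consecutive video segments into recording files can be sketched as a simple greedy packing. The function name and the byte threshold are illustrative assumptions, not the patent's method.

```python
def pack_into_files(video_segments, max_file_size):
    """Group consecutive (segment_id, size_in_bytes) video segments
    into recording files so that no file exceeds max_file_size;
    a single oversized segment still gets its own file."""
    files, current, current_size = [], [], 0
    for seg_id, size in video_segments:
        if current and current_size + size > max_file_size:
            files.append(current)          # close the current file
            current, current_size = [], 0
        current.append(seg_id)
        current_size += size
    if current:
        files.append(current)              # flush the last file
    return files
```

With a 100-byte limit, three 40-byte segments would be packed into two files: the first two segments together, the third on its own.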
Fig. 7 is a flowchart of a playing method of a recording file according to an embodiment of the present invention. As shown in fig. 7, the method includes:
step 710, selecting a recording segment in the recording file according to the person identification information and the time node identifiers in the index information;
and step 720, playing the recorded segment.
In the embodiments of the invention, during the recording of a conference, a training session, or the like, a recording segment associated with person identification information can be acquired, where the recording segment carries time node identifiers representing a start time point and an end time point; index information corresponding to the recording segment can be generated according to the person identification information and the time node identifiers, and a recording file can be generated according to the recording segment and the index information. Therefore, when searching the recording file, a user can directly locate the recording segments of a specific person according to the person identification information in the index information, and can directly select that person's recording segment in a certain time period according to the time node identifiers in the index information, so that the search efficiency for the recording file is improved and the user experience is enhanced.
Optionally, as an embodiment, the index information includes first index information generated according to the person identification information and second index information generated according to the time node identification;
wherein step 710 comprises:
selecting a first index item in the first index information;
and selecting a second index item in the second index information according to the first index item.
The first index information includes at least one of the name, image, timbre, and serial number in the person identification information. The first index item represents at least one of a name, an image, a timbre, and a serial number.
The second index information includes the time node identifiers, i.e., the start time point and the end time point of at least one recording segment. The second index item is a time node identifier.
Optionally, as an embodiment, the recording segment further includes an audio segment and a video segment, the index information further includes third index information, and the third index information includes an audio index item and a video index item respectively corresponding to the audio segment and the video segment;
step 710 further includes:
and selecting the audio index item or/and the video index item in the third index information according to the second index item.
Optionally, as an embodiment, step 720 includes:
and playing the audio segments or/and the video segments under the selected audio index items or/and video index items.
All the specific contents of the index information, the first index information, the second index information, the third index information, and the like may refer to specific embodiments described in fig. 1 to fig. 6, and this embodiment is not described in detail again.
Fig. 8 is a block diagram of a recording apparatus according to an embodiment of the present invention. As shown in fig. 8, the apparatus 800 includes:
an obtaining module 820, configured to obtain a recording segment associated with the person identification information, where the recording segment carries a time node identifier indicating a start time point and an end time point;
an index module 840, configured to generate index information corresponding to the recording segment according to the person identification information and the time node identification;
and a file module 860, configured to generate a recording file according to the recording segment and the index information.
In the embodiments of the invention, during the recording of a conference, a training session, or the like, a recording segment associated with person identification information can be acquired, where the recording segment carries time node identifiers representing a start time point and an end time point; index information corresponding to the recording segment can be generated according to the person identification information and the time node identifiers, and a recording file can be generated according to the recording segment and the index information. Therefore, when searching the recording file, a user can directly locate the recording segments of a specific person according to the person identification information in the index information, and can directly select that person's recording segment in a certain time period according to the time node identifiers in the index information, so that the search efficiency for the recording file is improved and the user experience is enhanced.
The recording apparatus 800 may be various electronic devices such as a terminal device having the above recording function.
Optionally, as an embodiment, the apparatus 800 further includes:
and the determining module is used for determining the person identification information according to the received sound information of the person.
Optionally, as an embodiment, the obtaining module 820 includes:
the recording unit is used for recording the time node identifications of the recording segments at the starting time point and the ending time point according to the person identification information;
and the segment unit is used for acquiring the recording segment according to the time node identification.
Optionally, as an embodiment, the indexing module 840 includes:
a first indexing unit, configured to generate first index information of the recording segment according to at least one of the name, image, timbre, and serial number information in the person identification information;
and a second indexing unit, configured to generate second index information of the recording segment according to the time node identifiers.
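A minimal sketch of this two-level index structure follows. The first index information identifies a person by attributes such as name, timbre, or serial number; the second index information lists that person's segments by time period, nested under the first-level entry like a directory hierarchy. All keys and field names here are illustrative assumptions, not the patent's data format.

```python
first_index = {
    "speaker-01": {                      # serial number used as the primary key
        "name": "Alice",
        "timbre_id": "voiceprint-a1",    # e.g. a voiceprint/timbre fingerprint
        "image_ref": "faces/alice.png",
    },
}

second_index = {
    # person key -> list of (start time point, end time point) node identifiers
    "speaker-01": [(0, 30_000), (55_000, 90_000)],
}

def segments_for(person_key, start_ms, end_ms):
    """Select the person's segments that overlap a requested time window."""
    return [
        (s, e) for (s, e) in second_index.get(person_key, [])
        if s < end_ms and e > start_ms
    ]

# segments_for("speaker-01", 50_000, 100_000) -> [(55000, 90000)]
```

The nesting mirrors the hierarchy described above: a lookup first resolves the person through the first index information, then narrows to a time period through the second index information.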
Optionally, as an embodiment, the recording segment includes an audio segment and a video segment;
the indexing module 840 further comprises:
and a third indexing unit, configured to generate third index information, where the third index information includes an audio index item and a video index item corresponding to the audio segment and the video segment, respectively.
Optionally, as an embodiment, the obtaining module 820 includes:
an audio unit, configured to obtain an audio segment associated with the person identification information, where the audio segment carries time node identifiers indicating a start time point and an end time point;
a synthesizing unit, configured to synthesize the audio segment and an image segment recorded synchronously with the audio segment into a video segment;
and a generating unit, configured to generate a recording segment containing the audio segment and the video segment.
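The acquisition path just described can be sketched as pairing an audio clip with the image frames recorded synchronously over the same interval. Actual audio/video muxing (e.g. with a codec library) is out of scope here; the class and function names are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class AudioClip:
    person_id: str   # person identification information
    start_ms: int    # time node identifier: start time point
    end_ms: int      # time node identifier: end time point

@dataclass
class VideoSegment:
    audio: AudioClip
    frame_times: list  # timestamps of the synchronously recorded image frames

def synthesize(audio, all_frame_times):
    """Keep only the frames whose timestamps fall inside the audio clip's interval."""
    frames = [t for t in all_frame_times if audio.start_ms <= t < audio.end_ms]
    return VideoSegment(audio=audio, frame_times=frames)

clip = AudioClip("speaker-01", 1_000, 4_000)
video = synthesize(clip, all_frame_times=[0, 1_000, 2_000, 3_000, 4_000, 5_000])
# video.frame_times == [1000, 2000, 3000]
```

Because both the audio clip and the resulting video segment share the same time node identifiers, the recording segment that contains them can be indexed once and located through either medium.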
Fig. 9 is a block diagram of a playback apparatus for recording a file according to an embodiment of the present invention. As shown in fig. 9, the apparatus 900 includes:
a selecting module 910, configured to select a recording segment in the recording file according to the person identification information and the time node identifiers in the index information;
and a playing module 920, configured to play the recording segment.
In the embodiment of the present invention, during the recording of a conference, a training session, or the like, a recording segment associated with person identification information can be obtained, where the recording segment carries time node identifiers indicating a start time point and an end time point; index information corresponding to the recording segment can be generated according to the person identification information and the time node identifiers; and a recording file can be generated according to the recording segment and the index information. Therefore, when looking up a recording segment in the recording file, a user can directly locate the recording segment of a specific person to be played according to the person identification information in the index information, and can directly select, according to the time node identifiers in the index information, a recording segment of that person in one or more time periods to be played, which improves the efficiency of searching the recording file and improves the user experience.
Optionally, as an embodiment, the index information includes first index information generated according to the person identification information and second index information generated according to the time node identifiers;
the selection module 910 includes:
a first selecting unit, configured to select a first index item in the first index information;
and a second selecting unit, configured to select a second index item in the second index information according to the first index item.
Optionally, as an embodiment, the recording segment includes an audio segment and a video segment, the index information further includes third index information, and the third index information includes an audio index item and a video index item corresponding to the audio segment and the video segment, respectively;
the selection module 910 further comprises:
and a third selecting unit, configured to select an audio index item and/or a video index item in the third index information according to the second index item.
Optionally, as an embodiment, the playing module 920 is configured to:
and playing the audio segment and/or the video segment under the selected audio index item and/or video index item.
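The three-level selection performed by the playback apparatus can be sketched as walking a nested index: pick a first index item (the person), then a second index item (a time period of that person), then the audio and/or video index item under it. The nested-dict layout and all keys are assumptions for illustration only.

```python
index = {
    "Alice": {                              # first index item: person
        (0, 30_000): {                      # second index item: time period
            "audio": "rec/alice_0.aac",     # audio index item
            "video": "rec/alice_0.mp4",     # video index item
        },
    },
}

def select(person, period, kinds=("audio", "video")):
    """Walk the index hierarchy and return the media entries to play."""
    entry = index[person][period]
    return {k: entry[k] for k in kinds if k in entry}

# select("Alice", (0, 30_000), kinds=("audio",)) -> {"audio": "rec/alice_0.aac"}
```

Restricting `kinds` to `("audio",)` or `("video",)` corresponds to playing only the audio segment or only the video segment under the selected index items.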
Fig. 10 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention, where the terminal device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, and a power supply 1011. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 10 is not intended to be limiting, and that terminal devices may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
Wherein, the processor 1010 is configured to:
acquiring a recording segment associated with person identification information, wherein the recording segment carries time node identifiers indicating a start time point and an end time point;
generating index information corresponding to the recording segment according to the person identification information and the time node identifiers;
and generating a recording file according to the recording segment and the index information.
Wherein, the processor 1010 is further configured to:
selecting a recording segment in the recording file according to the person identification information and the time node identifiers in the index information;
and playing the recording segment.
In the embodiment of the present invention, during the recording of a conference, a training session, or the like, a recording segment associated with person identification information can be obtained, where the recording segment carries time node identifiers indicating a start time point and an end time point; index information corresponding to the recording segment can be generated according to the person identification information and the time node identifiers; and a recording file can be generated according to the recording segment and the index information. Therefore, when looking up a recording segment in the recording file, a user can directly locate the recording segment of a specific person to be played according to the person identification information in the index information, and can directly select, according to the time node identifiers in the index information, a recording segment of that person in one or more time periods to be played, which improves the efficiency of searching the recording file and improves the user experience.
It should be understood that, in this embodiment of the present invention, the radio frequency unit 1001 may be used for receiving and sending signals during message transmission or a call; specifically, it receives downlink data from a base station and forwards the data to the processor 1010 for processing, and it also sends uplink data to the base station. In general, the radio frequency unit 1001 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 1001 may also communicate with a network and other devices through a wireless communication system.
The terminal device provides the user with wireless broadband internet access through the network module 1002, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 1003 may convert audio data received by the radio frequency unit 1001 or the network module 1002 or stored in the memory 1009 into an audio signal and output as sound. Also, the audio output unit 1003 can also provide audio output related to a specific function performed by the terminal apparatus 1000 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 1003 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1004 is used to receive an audio or video signal. The input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042. The graphics processor 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 1006. The image frames processed by the graphics processor 10041 may be stored in the memory 1009 (or another storage medium) or transmitted via the radio frequency unit 1001 or the network module 1002. The microphone 10042 can receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 1001.
Terminal device 1000 can also include at least one sensor 1005, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 10061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 10061 and/or backlight when the terminal device 1000 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 1005 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 1006 is used to display information input by the user or information provided to the user. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 1007 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071, also referred to as a touch screen, can collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 10071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 10071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 1010, and receives and executes commands sent by the processor 1010. In addition, the touch panel 10071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 10071, the user input unit 1007 may include other input devices 10072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, and a joystick; details are not repeated here.
Further, the touch panel 10071 can be overlaid on the display panel 10061, and when the touch panel 10071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 1010 to determine the type of the touch event, and then the processor 1010 provides a corresponding visual output on the display panel 10061 according to the type of the touch event. Although in fig. 10, the touch panel 10071 and the display panel 10061 are two independent components for implementing the input and output functions of the terminal device, in some embodiments, the touch panel 10071 and the display panel 10061 may be integrated to implement the input and output functions of the terminal device, and the implementation is not limited herein.
The interface unit 1008 is an interface for connecting an external device to the terminal apparatus 1000. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. Interface unit 1008 can be used to receive input from external devices (e.g., data information, power, etc.) and transmit the received input to one or more elements within terminal apparatus 1000 or can be used to transmit data between terminal apparatus 1000 and external devices.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook). Further, the memory 1009 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 1010 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by operating or executing software programs and/or modules stored in the memory 1009 and calling data stored in the memory 1009, thereby performing overall monitoring of the terminal device. Processor 1010 may include one or more processing units; preferably, the processor 1010 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
Terminal device 1000 can also include a power source 1011 (e.g., a battery) for powering the various components, and preferably, power source 1011 can be logically coupled to processor 1010 through a power management system that provides management of charging, discharging, and power consumption.
In addition, the terminal device 1000 includes some functional modules that are not shown, and are not described herein again.
Preferably, an embodiment of the present invention further provides a terminal device, including a processor 1010, a memory 1009, and a computer program stored in the memory 1009 and executable on the processor 1010. When executed by the processor 1010, the computer program implements each process of the foregoing method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the foregoing method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. A method of recording, the method comprising:
acquiring a recording segment associated with person identification information, wherein the recording segment carries time node identifiers indicating a start time point and an end time point;
generating index information corresponding to the recording segment according to the person identification information and the time node identifiers;
and generating a recording file according to the recording segment and the index information;
wherein the generating index information corresponding to the recording segment according to the person identification information and the time node identifiers comprises:
generating first index information of the recording segment according to at least one of name information, image information, timbre information, and serial number information in the person identification information;
and generating second index information of the recording segment according to the time node identifiers;
wherein the first index information and the second index information have a directory hierarchy relationship, by which a person is first determined according to the first index information, and a recording segment of the person in a certain time period is then selected according to the second index information under the first index information.
2. The method of claim 1, wherein before the acquiring the recording segment associated with the person identification information, the method further comprises:
the personal identification information is determined based on the received personal sound information.
3. The method of claim 1, wherein the acquiring the recording segment associated with the person identification information comprises:
recording, according to the person identification information, the time node identifiers of the recording segment at a start time point and an end time point;
and acquiring the recording segment according to the time node identifiers.
4. The method of claim 1, wherein the recording segment comprises an audio segment and a video segment;
and the generating index information corresponding to the recording segment according to the person identification information and the time node identifiers further comprises:
generating third index information, wherein the third index information comprises an audio index item and a video index item corresponding to the audio segment and the video segment, respectively.
5. The method of claim 1, wherein the acquiring the recording segment associated with the person identification information comprises:
acquiring an audio segment associated with the person identification information, wherein the audio segment carries time node identifiers indicating a start time point and an end time point;
synthesizing the audio segment and an image segment recorded synchronously with the audio segment into a video segment;
and generating a recording segment containing the audio segment and the video segment.
6. A recording apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a recording segment associated with person identification information, wherein the recording segment carries time node identifiers indicating a start time point and an end time point;
an indexing module, configured to generate index information corresponding to the recording segment according to the person identification information and the time node identifiers;
and a file module, configured to generate a recording file according to the recording segment and the index information;
wherein the indexing module comprises:
a first indexing unit, configured to generate first index information of the recording segment according to at least one of name information, image information, timbre information, and serial number information in the person identification information;
and a second indexing unit, configured to generate second index information of the recording segment according to the time node identifiers;
wherein the first index information and the second index information have a directory hierarchy relationship, by which a person is first determined according to the first index information, and a recording segment of the person in a certain time period is then selected according to the second index information under the first index information.
7. The apparatus of claim 6, further comprising:
a determining module, configured to determine the person identification information according to received voice information of the person.
8. The apparatus of claim 6, wherein the acquisition module comprises:
a recording unit, configured to record, according to the person identification information, the time node identifiers of the recording segment at its start time point and end time point;
and a segment unit, configured to acquire the recording segment according to the time node identifiers.
9. The apparatus of claim 6, wherein the recording segment comprises an audio segment and a video segment;
the indexing module further comprises:
and a third indexing unit, configured to generate third index information, wherein the third index information comprises an audio index item and a video index item corresponding to the audio segment and the video segment, respectively.
10. The apparatus of claim 6, wherein the acquisition module comprises:
an audio unit, configured to acquire an audio segment associated with the person identification information, wherein the audio segment carries time node identifiers indicating a start time point and an end time point;
a synthesizing unit, configured to synthesize the audio segment and an image segment recorded synchronously with the audio segment into a video segment;
and a generating unit, configured to generate a recording segment containing the audio segment and the video segment.
11. A terminal device, comprising: memory, processor and computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the method according to any one of claims 1 to 5.
CN201810532492.7A 2018-05-29 2018-05-29 Recording method, recording device and terminal equipment Active CN108763475B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810532492.7A CN108763475B (en) 2018-05-29 2018-05-29 Recording method, recording device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810532492.7A CN108763475B (en) 2018-05-29 2018-05-29 Recording method, recording device and terminal equipment

Publications (2)

Publication Number Publication Date
CN108763475A CN108763475A (en) 2018-11-06
CN108763475B true CN108763475B (en) 2021-01-15

Family

ID=64003708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810532492.7A Active CN108763475B (en) 2018-05-29 2018-05-29 Recording method, recording device and terminal equipment

Country Status (1)

Country Link
CN (1) CN108763475B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109474849B (en) * 2018-11-12 2019-11-26 广东乐心医疗电子股份有限公司 Multimedia data processing method, system, terminal and computer readable storage medium
CN112351290A (en) * 2020-09-08 2021-02-09 深圳Tcl新技术有限公司 Video recording method, device and equipment of intelligent equipment and readable storage medium
CN112653896B (en) * 2020-11-24 2023-06-13 贝壳技术有限公司 House source information playback method and device with viewing assistant, electronic equipment and medium
CN115567670A (en) * 2021-07-02 2023-01-03 信骅科技股份有限公司 Conference viewing method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1703694A (en) * 2001-12-11 2005-11-30 皇家飞利浦电子股份有限公司 System and method for retrieving information related to persons in video programs
CN106024009A (en) * 2016-04-29 2016-10-12 北京小米移动软件有限公司 Audio processing method and device
CN106791549A (en) * 2016-11-21 2017-05-31 建荣半导体(深圳)有限公司 A kind of videotape storage means, system and drive recorder
CN107360387A (en) * 2017-07-13 2017-11-17 广东小天才科技有限公司 The method, apparatus and terminal device of a kind of video record
US9883221B1 (en) * 2015-03-25 2018-01-30 Concurrent Computer Corporation System and method for optimizing real-time video-on-demand recording in a content delivery network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105653729B (en) * 2016-01-28 2019-10-08 努比亚技术有限公司 A kind of device and method of recording file index

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1703694A (en) * 2001-12-11 2005-11-30 皇家飞利浦电子股份有限公司 System and method for retrieving information related to persons in video programs
US9883221B1 (en) * 2015-03-25 2018-01-30 Concurrent Computer Corporation System and method for optimizing real-time video-on-demand recording in a content delivery network
CN106024009A (en) * 2016-04-29 2016-10-12 北京小米移动软件有限公司 Audio processing method and device
CN106791549A (en) * 2016-11-21 2017-05-31 建荣半导体(深圳)有限公司 A kind of videotape storage means, system and drive recorder
CN107360387A (en) * 2017-07-13 2017-11-17 广东小天才科技有限公司 The method, apparatus and terminal device of a kind of video record

Also Published As

Publication number Publication date
CN108763475A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
CN110740259B (en) Video processing method and electronic equipment
CN108763475B (en) Recording method, recording device and terminal equipment
CN111314784B (en) Video playing method and electronic equipment
CN110557683B (en) Video playing control method and electronic equipment
WO2021012900A1 (en) Vibration control method and apparatus, mobile terminal, and computer-readable storage medium
CN108616448B (en) Information sharing path recommendation method and mobile terminal
CN110830362B (en) Content generation method and mobile terminal
CN109284081B (en) Audio output method and device and audio equipment
CN107886969B (en) Audio playing method and audio playing device
CN110830368B (en) Instant messaging message sending method and electronic equipment
CN109495638B (en) Information display method and terminal
CN109257498B (en) Sound processing method and mobile terminal
CN108074574A (en) Audio-frequency processing method, device and mobile terminal
CN110719527A (en) Video processing method, electronic equipment and mobile terminal
CN110958485A (en) Video playing method, electronic equipment and computer readable storage medium
CN110808019A (en) Song generation method and electronic equipment
CN110855921A (en) Video recording control method and electronic equipment
CN111752448A (en) Information display method and device and electronic equipment
CN110750198A (en) Expression sending method and mobile terminal
CN111143614A (en) Video display method and electronic equipment
CN108632465A (en) A kind of method and mobile terminal of voice input
CN109922199B (en) Contact information processing method and terminal
CN109325219B (en) Method, device and system for generating record document
CN110880330A (en) Audio conversion method and terminal equipment
CN110913256A (en) Video searching method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant