CN117676056A - File display method and device, electronic equipment and readable storage medium

File display method and device, electronic equipment and readable storage medium

Info

Publication number
CN117676056A
Authority
CN
China
Prior art keywords
audio
input
video
folder
scene information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311760619.8A
Other languages
Chinese (zh)
Inventor
李俊华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
2023-12-19
Publication date
2024-03-08
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202311760619.8A
Publication of CN117676056A
Legal status: Pending

Abstract

The application discloses a file display method, a file display device, an electronic device and a readable storage medium, and belongs to the field of information display. The method comprises: displaying at least one folder identification, where the folder identification is associated with scene information; receiving a first input of a first folder identification in the at least one folder identification, where the first folder identification is associated with first scene information; and, in response to the first input, displaying at least one first audio-video file, where key video frames in the first audio-video file are associated with the first scene information.

Description

File display method and device, electronic equipment and readable storage medium
Technical Field
The application belongs to the field of information display, and particularly relates to a file display method, a device, electronic equipment and a readable storage medium.
Background
At present, with the rise of short-video technology and the continuous development of the image-capturing functions of electronic devices, a large number of videos are typically stored on an electronic device. If a user wants to find, among that large number of videos, the videos he or she actually wants to view, the user has to open and check the videos one by one.
As a result, users currently browse audio and video files inefficiently.
Disclosure of Invention
An object of the embodiments of the present application is to provide a file display method, apparatus, electronic device, and readable storage medium, which can solve the problem that users currently browse audio and video files inefficiently.
In a first aspect, an embodiment of the present application provides a method for displaying a file, where the method includes:
displaying at least one folder identification, wherein the folder identification is associated with scene information;
receiving a first input of a first folder identification in at least one folder identification, wherein the first folder identification is associated with first scene information;
and in response to the first input, displaying at least one first audio-video file, wherein key video frames in the first audio-video file are associated with the first scene information.
In a second aspect, an embodiment of the present application provides a document display apparatus, including:
the display module is used for displaying at least one folder identifier, and the folder identifier is associated with the scene information;
the receiving module is used for receiving a first input of a first folder identifier in at least one folder identifier, and the first folder identifier is associated with first scene information;
the display module is further configured to display at least one first audio-video file in response to the first input, where a key video frame in the first audio-video file is associated with the first scene information.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, at least one folder identification used for indicating scene information is displayed, so that the user can quickly and intuitively learn the scene information indicated by each folder identification and make a selection accordingly. A first input of a first folder identification in the at least one folder identification is received, where the first folder identification is associated with first scene information, which indicates that the user wants to view the audio/video files related to the first scene information indicated by the first folder identification. In response to the first input, at least one first audio-video file is displayed, where the key video frames in the first audio-video file are associated with the first scene information; that is, the key content of the first audio-video file is associated with the first scene information. The first audio-video files that the user wants to view and that are associated with the first scene information can therefore be displayed quickly, enabling the user to view them efficiently and improving file browsing efficiency.
Drawings
FIG. 1 is a flowchart of a method for displaying files according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a folder identifier according to an embodiment of the present application;
fig. 3 is an interface schematic diagram of identifying scene information according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an interface of a first video according to an embodiment of the present application;
FIG. 5 is an interface schematic diagram of a scene identifier according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an interface for identifying scene information according to an embodiment of the present disclosure;
fig. 7 is an interface schematic diagram for identifying scene information according to an embodiment of the present application;
FIG. 8 is a block diagram of a document display apparatus according to an embodiment of the present application;
fig. 9 is one of the hardware configuration diagrams of the electronic device according to the embodiment of the present application;
fig. 10 is a second schematic diagram of a hardware structure of the electronic device according to the embodiment of the present application.
Detailed Description
Technical solutions of embodiments of the present application will be clearly described below with reference to the accompanying drawings of embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application are within the scope of the protection of the present application.
The terms "first", "second" and the like in the description and in the claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. The objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The embodiments of the present application provide a file display method, a file display device, an electronic device and a readable storage medium, which can solve the problem that users currently browse audio and video files inefficiently.
The file display method provided by the embodiment of the application is described below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a flowchart of a file display method according to an embodiment of the present application.
As shown in fig. 1, the file display method may include steps 110 to 130, and the method is applied to a file display device, specifically as follows:
step 110, displaying at least one folder identifier, wherein the folder identifier is used for indicating scene information;
wherein the folder identification is used for indicating scene information of the video;
illustratively, the scene information may include: "building" scene information, "scenery" scene information, "crowd" scene information, "pet" scene information, and "flower" scene information, etc. Accordingly, the folder identification may include: "building", "landscape", "crowd", "pet" and "flowers", etc.
Prior to step 110, the steps of:
analyzing the audio and video file, and dividing the audio and video file into at least one video clip according to the scene information; one scene information is associated with at least one video clip;
and moving the video clip corresponding to each scene information into the folder associated with the scene information.
The video clips stored in the folders associated with the first scene information are first audio and video files associated with the first scene information.
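The segmentation-and-grouping step described above can be illustrated with the following minimal Kotlin sketch. It assumes that a per-frame scene classifier has already produced (timestamp, scene label) samples for the video; the names segmentByScene and groupIntoFolders are illustrative and do not come from the patent.

```kotlin
// Minimal sketch (not the patented implementation): turn per-frame scene labels
// into scene-tagged clips, then group the clips by scene, one folder per scene.
data class Clip(val startSec: Double, val endSec: Double, val scene: String)

// frameScenes: (timestampSec, sceneLabel) samples, assumed non-empty and ordered by time.
fun segmentByScene(frameScenes: List<Pair<Double, String>>): List<Clip> {
    val clips = mutableListOf<Clip>()
    var start = frameScenes.first().first
    var currentScene = frameScenes.first().second
    for ((t, scene) in frameScenes.drop(1)) {
        if (scene != currentScene) {          // a scene change closes the current clip
            clips.add(Clip(start, t, currentScene))
            start = t
            currentScene = scene
        }
    }
    clips.add(Clip(start, frameScenes.last().first, currentScene))
    return clips
}

// One "folder" per scene label: the map key plays the role of the folder identification.
fun groupIntoFolders(clips: List<Clip>): Map<String, List<Clip>> =
    clips.groupBy { it.scene }

fun main() {
    val frames = listOf(
        0.0 to "building", 1.0 to "building",
        2.0 to "food", 3.0 to "food",
        4.0 to "building"
    )
    println(groupIntoFolders(segmentByScene(frames)))
}
```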
Optionally, after a video is placed in the folder of a scene, when the video is opened from that folder, only the clip containing the scene is played. For example, when the video is clicked for playback, the clip of the video that contains the scene may be determined according to the scene.
Specifically, the scene information includes first scene information and second scene information. At least one first audio-video file can be determined from the audio-video files according to the first scene information, and the at least one first audio-video file is moved into the first folder indicated by the first folder identification;
at least one second audio-video file is determined from the audio-video files according to the second scene information, and the at least one second audio-video file is moved into the second folder indicated by the second folder identification.
The electronic device may analyze the videos when it is in an idle state and its battery level is higher than a preset level, and divide the audio and video files into a plurality of categories.
After the audio and video files corresponding to each category are moved into the folders associated with the category, a plurality of folders exist in the album application, and one folder corresponds to one scene information.
As shown in fig. 2, at least one folder identifier 200 is displayed, the folder identifier being used to indicate scene information, the at least one folder identifier including a first folder identifier.
In one possible embodiment, prior to step 110, the method further comprises:
receiving a fourth input to the first image;
determining, in response to the fourth input, first scene information associated with the first image;
determining at least one first audio-video file from the plurality of audio-video files according to the first scene information;
and moving at least one first audio and video file into a first folder indicated by the first folder identification.
In response to a fourth input to the first image, first scene information associated with the first image is determined, the first scene information being determined by the first image.
Wherein, in response to the fourth input, the step of determining the first scene information associated with the first image may specifically include: identifying key image elements in the first image; first scene information associated with the key image element is determined.
The sources of the audio and video files may include: camera shooting, network download and external import.
In addition, for any first audio/video file, the duration of the key video frame associated with the first scene information is longer than a preset duration, for example, the preset duration may be 3 seconds, 4 seconds or 5 seconds. Accordingly, the duration of the key audio frame associated with the first scene information may also be greater than the preset duration.
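The duration condition above can be read as the following hedged Kotlin sketch; the threshold value and names are illustrative assumptions, not taken from the patent.

```kotlin
// Illustrative check: a video qualifies for a scene's folder only if its footage
// associated with that scene lasts longer than the preset duration (3 s here,
// one of the example values mentioned above).
const val PRESET_DURATION_SEC = 3.0

// sceneRanges: (startSec, endSec) ranges of the key frames associated with the scene.
fun qualifiesForScene(sceneRanges: List<Pair<Double, Double>>): Boolean =
    sceneRanges.sumOf { (start, end) -> end - start } > PRESET_DURATION_SEC
```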
As shown in fig. 3, input to import picture control 310 is received, at least one preview image is displayed, a fourth input to a first image of the at least one preview image is received, and the first image is uploaded to determine first scene information associated with the first image.
Illustratively, the first image contains a conference room in which a table is the main element; audio-video files of the table or of the conference-room scene are then matched using the first scene information.
Illustratively, the first image includes a sky therein, and thus, the first audiovisual files each include an image element of the sky therein.
Therefore, according to the first scene information associated with the first image, at least one first audio-video file is determined from the plurality of audio-video files, the first audio-video file associated with the first scene information associated with the first image can be quickly determined, and the at least one first audio-video file is moved into a first folder indicated by the first folder identification, so that quick classification of the audio-video files is realized.
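A hedged Kotlin sketch of this image-driven matching is given below; it assumes a hypothetical element detector has already extracted the key elements of the first image and that each video carries precomputed per-clip scene labels, as in the earlier sketch.

```kotlin
// Sketch only: videos whose clip scene labels overlap the key elements of the
// first image are treated as first audio-video files. `clipScenes` is an
// assumed precomputed field, not an API from the patent.
data class Video(val name: String, val clipScenes: Set<String>)

fun matchVideosToImage(keyElements: Set<String>, videos: List<Video>): List<Video> =
    videos.filter { video -> video.clipScenes.any { it in keyElements } }

fun main() {
    val videos = listOf(
        Video("trip.mp4", setOf("sky", "building")),
        Video("meeting.mp4", setOf("table", "crowd"))
    )
    // Suppose the (hypothetical) detector returned {"sky"} for the first image:
    println(matchVideosToImage(setOf("sky"), videos).map { it.name })  // [trip.mp4]
}
```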
Step 120, receiving a first input of a first folder identifier in the at least one folder identifier, wherein the first folder identifier is used for indicating first scene information;
Illustratively, the first folder identification is "building"; it is used to indicate the first scene information "building scene", and the folder indicated by the first folder identification contains the audio-video files with a "building" element.
In addition, since a video may contain multiple different scenes, the same video may appear in multiple folders. Thus, the same video may be included in the folder indicated by the first folder identification and the folders indicated by the other folder identifications.
In response to the first input, at least one first audio-video file is displayed, the key video frames in the first audio-video file being associated with first scene information, step 130.
The sources of the first audio and video file may include: camera shooting, network download and external import.
As shown in fig. 4, the folder indicated by the first folder identifier includes an audio/video file with a "building" element. In the interface shown in fig. 4, at least one first audio/video file 410 is displayed.
In addition, the thumbnail of each first audio-video file may display a video frame associated with the first scene information to facilitate quick positioning by the user.
In one possible embodiment, after step 130, the method further comprises:
receiving a second input to a second audio-video file of the at least one first audio-video file;
in response to the second input, a second audio-video file is played, the progress bar of the second audio-video file including a first scene identification indicating a video clip associated with the first scene information.
When the user clicks to view a video, by default only the segments containing the scene element are played, and the segments that do not contain it are automatically skipped, which makes it convenient for the user to quickly locate the required content. The user can also export the video containing the scene element and share it to a social account with one tap.
In response to the second input, the video clips associated with the at least one scene information in the second audio-video file may be played automatically, and the user may also choose to play all the clips manually.
The progress bar specifically may include: at least one first scene identification and at least one second scene identification; the first scene identification is used for indicating a video clip associated with the first scene information; the second scene identification is used to indicate a video clip associated with the second scene information.
For example, if the first scene information is "food", each first scene identifier included in the progress bar is used to indicate a video clip associated with "food".
Illustratively, the second audio-video file is a one-day travel-record video shot by a video blogger, and the video content in the second audio-video file includes: touring a city, visiting a landmark building, trying local food, interviewing a store owner, recording a monologue, and the like.
In response to the second input, the second audio-video file is played, and the progress bar of the second audio-video file includes at least one scene identifier, where the scene identifiers are used to indicate the video clips associated with the landmark building, the local food, the store owner and the monologue, respectively.
As shown in fig. 5, the progress bar of the second audio-video file includes a plurality of scene identifications 510 for indicating video clips associated with at least one scene information.
Therefore, by entering the first folder indicated by the first folder identification and opening the first audio-video file, the user can clearly see at least one scene identification on the progress bar of the second audio-video file and intuitively check which time periods contain the required video material, which greatly improves video browsing efficiency.
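One possible data model for the scene-marked progress bar is shown below as a Kotlin sketch, under the assumption that scene identifiers are stored as labeled time ranges; the names are illustrative, not from the patent.

```kotlin
// Sketch: each scene identifier on the progress bar maps to a time range; the
// default playback plan keeps only the ranges matching the selected scene and
// skips everything else.
data class SceneMarker(val scene: String, val startSec: Double, val endSec: Double)

fun playbackPlan(markers: List<SceneMarker>, selectedScene: String): List<ClosedFloatingPointRange<Double>> =
    markers.filter { it.scene == selectedScene }
        .sortedBy { it.startSec }
        .map { it.startSec..it.endSec }

fun main() {
    val markers = listOf(
        SceneMarker("landmark building", 0.0, 40.0),
        SceneMarker("local food", 40.0, 90.0),
        SceneMarker("store owner interview", 90.0, 150.0)
    )
    println(playbackPlan(markers, "local food"))  // [40.0..90.0]
}
```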
In one possible embodiment, after step 130, the method further comprises:
receiving a third input to a third audio-video file of the at least one first audio-video file;
playing the first video clip in response to the third input; or, storing the first video clip;
the first video clip is a video clip associated with the first scene information in the third audio/video file.
In response to a third input to a third audio-video file of the at least one first audio-video file, the video clips in the third audio-video file associated with the first scene information are identified, and only the first video clip is played, or only the portion containing the scene element is saved.
The user can also choose to export, saving the video clip associated with the first scene information in the third audio-video file as a new file.
The step of storing the first video segment in response to the third input may specifically include:
responding to the third input, and acquiring a first video clip from the first audio/video file according to the first scene information; the first video segment is saved.
Here, by acquiring the first video clip from the first audio-video file according to the first scene information, the user can quickly extract the first video clip associated with the first scene information from the first audio-video file based on his or her storage requirement. Because the data size of the first video clip is smaller than that of the first audio-video file, saving the first video clip occupies less storage space and also makes the clip easier to share.
In addition, the user can manually drag the progress bar to change the first video clip, so as to flexibly play the first video clip or save the first video clip.
Thereby, the first video clip is played, or saved, in response to a third input to a third audio-video file of the at least one first audio-video file, so that the video clip associated with the first scene information can be browsed quickly and the user can view it efficiently.
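The save option can be sketched as follows; trimVideo stands in for a real media-trimming routine (for example an ffmpeg- or MediaCodec-based one) and is a hypothetical placeholder, as are the other names.

```kotlin
// Hedged sketch of saving the first video clip: look up the time range
// associated with the first scene information and hand it to a trimming routine.
data class TimeRange(val startSec: Double, val endSec: Double)

fun saveFirstClip(
    sourcePath: String,
    sceneInfo: String,
    sceneRanges: Map<String, TimeRange>,                      // scene -> matching range
    trimVideo: (input: String, range: TimeRange, output: String) -> Unit
): String? {
    val range = sceneRanges[sceneInfo] ?: return null         // no footage for this scene
    val outPath = sourcePath.removeSuffix(".mp4") + "_" + sceneInfo + ".mp4"
    trimVideo(sourcePath, range, outPath)                     // smaller file, easier to share
    return outPath
}
```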
In one possible embodiment, after step 130, the method further comprises:
receiving a fifth input to a fourth audio-video file of the at least one audio-video file;
responsive to the fifth input, dividing the fourth audiovisual file into at least one video segment;
and respectively moving at least one video clip into the folders corresponding to the video clips.
The fifth input is used for dividing the fourth audio/video file into at least one video clip according to the scene information.
Illustratively, the at least one video clip comprises video clip 1 and video clip 2, which may be associated with the first scene information and the second scene information, respectively; that is, the video scene in video clip 1 is the first scene information, and the video scene in video clip 2 is the second scene information.
For example, if the first scene information is "food", video clip 1 is a video clip associated with "food"; if the first scene information is "park", video clip 1 is a video clip associated with "park".
Video clip 1 may be moved into a folder corresponding to the first scene information and video clip 2 may be moved into a folder corresponding to the second scene information.
Thus, the fourth audio-video file is divided into at least one video clip, and each video clip is moved into its corresponding folder, so that the video clips are sorted and the user can conveniently view them through the folders.
In one possible embodiment, at least one folder identification comprises: at least one audio folder identification; the method can also comprise the following steps:
receiving a sixth input of a second folder identification of the at least one audio folder identification;
in response to a sixth input, at least one first audio file is displayed, key audio frames in the first audio file being associated with second scene information, the second folder identification being used to indicate the second scene information.
The audio folder identifier specifically may include: "meeting", "interview", "monologue", "downtown street", "quiet environment", and the like.
Illustratively, the user wants to listen to a first audio file associated with the second scene information "meeting". A sixth input of the second folder identifier, which indicates the second scene information, is received among the at least one audio folder identifier; in response to the sixth input, the at least one first audio file is displayed, so that the first audio files associated with the second scene information "meeting" are quickly presented.
Therefore, in response to the sixth input of the second folder identifier indicating the second scene information among the at least one audio folder identifier, at least one first audio file is displayed, where the key audio frames in the first audio file are associated with the second scene information. The user can thus quickly look up the first audio files associated with the second scene information without opening and listening to the audio files one by one, which improves the user's browsing efficiency for audio files.
In a possible embodiment, before the step of receiving the sixth input to the second folder identification of the at least one audio folder identification, the steps of:
analyzing the plurality of audio files, dividing the plurality of audio files into a plurality of categories, and associating one category with one scene information; and moving the audio files corresponding to each category into the folder associated with the category.
Wherein the plurality of categories may include: "meeting", "interview", "monologue", "downtown street", "quiet environment", etc.
In a possible embodiment, before the step of receiving the sixth input to the second folder identifier in the at least one audio folder identifier, the step of:
receiving a seventh input of the first audio information;
determining, in response to the seventh input, second scene information associated with the first audio information;
determining at least one first audio file from the plurality of audio files according to the second scene information;
at least one first audio file is moved into a second folder indicated by a second folder identification.
To improve the accuracy of the audio folder classification, audio information of a specific voice may be saved and used to identify the audio files containing that voice.
Illustratively, the first audio information is a recording of teacher A's lecture; therefore, the second scene information associated with the first audio information can be determined as the scene of teacher A's lecture. At least one first audio file is determined from the plurality of audio files according to the second scene information, where the scene of each first audio file is the scene of teacher A's lecture. The at least one first audio file is then moved into the second folder indicated by the second folder identification "teacher A's lecture".
Thus, the second scene information associated with the first audio information is determined, and at least one first audio file is determined from the plurality of audio files according to the second scene information, so that a user can quickly view the first audio file associated with the second scene information.
In a possible embodiment, the step of receiving the seventh input of the first audio information may specifically include the steps of:
receiving a seventh input of the first audio information and audio characteristics, the audio characteristics including at least one of: sound source type, volume level, and scene type;
in response to the seventh input, the step of determining the second scene information associated with the first audio information may specifically include the steps of:
in response to the seventh input, second scene information is determined from the first audio information and the audio features.
The first audio information may be audio information in an audio file, or may be audio information in an audio/video file.
The sound source type may specifically include: human voice, musical voice or natural sound;
the volume level may specifically include: a high-decibel level, a low-noise level, and a high-frequency-duty-cycle level;
alternatively, the volume level may specifically include: a first volume level of 20 dB to 30 dB, a second volume level of 30 dB to 40 dB, and a third volume level of 40 dB to 50 dB;
the scene types may specifically include: conference scenes, outdoor scenes, and music scenes.
In response to a seventh input of the first audio information and the audio features, the electronic device analyzes the first audio information imported by the user as a whole, extracts the feature information in it, and presents the feature information for the user to screen, including voice timbre, intelligent scene, sound source type, volume level and the like.
The user can select audio features in the setting interface; after the setting is completed, a scene containing the selected audio features is obtained, which is used to determine the second scene information and to match the audio files associated with the second scene information.
Illustratively, as shown in FIG. 6, the seventh input comprises: a selection input of the voice timbre of colleague A in the first audio information, a selection input of the scene type "conference" among the audio features, a selection input of the sound source type "human voice" among the audio features, and a selection input of the volume level "high decibel" among the audio features. The obtained second scene information is a noisy, heatedly discussed conference in which colleague A participates; that is, the second scene information is determined according to the first audio information and the audio features.
In addition, the custom mode of audio classification can be divided into whole-segment identification and segment interception.
As shown in fig. 7, when the segment-interception method is used, the user may select a piece of audio information on the time axis, such as a mobile phone ringtone, a sentence, or a car whistle, to determine the scene information associated with it.
Thus, the second scene information can be accurately determined from the first audio information and the audio features, so that at least one first audio file can be accurately determined from the plurality of audio files from the second scene information later.
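The screening described in this embodiment (the FIG. 6 example) can be modeled with the Kotlin sketch below. The enum values, field names and matching rule are assumptions for illustration; in particular, a file is taken to match when it contains the selected voice and satisfies every feature the user actually selected.

```kotlin
// Assumed data model for audio-feature screening; not the patented implementation.
enum class SourceType { HUMAN_VOICE, MUSIC, NATURAL_SOUND }
enum class VolumeLevel { HIGH_DECIBEL, LOW_NOISE, HIGH_FREQUENCY }
enum class SceneType { CONFERENCE, OUTDOOR, MUSIC }

data class AudioFeatures(
    val sourceType: SourceType? = null,   // null = the user did not select this feature
    val volumeLevel: VolumeLevel? = null,
    val sceneType: SceneType? = null
)

data class SecondSceneInfo(val voiceId: String, val features: AudioFeatures)

data class AudioFile(
    val name: String,
    val voices: Set<String>,              // assumed to come from a prior voice analysis
    val sourceType: SourceType,
    val volumeLevel: VolumeLevel,
    val sceneType: SceneType
)

fun matches(file: AudioFile, scene: SecondSceneInfo): Boolean =
    scene.voiceId in file.voices &&
        (scene.features.sourceType == null || scene.features.sourceType == file.sourceType) &&
        (scene.features.volumeLevel == null || scene.features.volumeLevel == file.volumeLevel) &&
        (scene.features.sceneType == null || scene.features.sceneType == file.sceneType)

fun main() {
    // The "colleague A in a noisy conference" example from FIG. 6:
    val scene = SecondSceneInfo(
        voiceId = "colleague A",
        features = AudioFeatures(SourceType.HUMAN_VOICE, VolumeLevel.HIGH_DECIBEL, SceneType.CONFERENCE)
    )
    val files = listOf(
        AudioFile("weekly-sync.m4a", setOf("colleague A", "colleague B"),
            SourceType.HUMAN_VOICE, VolumeLevel.HIGH_DECIBEL, SceneType.CONFERENCE),
        AudioFile("park-walk.m4a", setOf("colleague B"),
            SourceType.NATURAL_SOUND, VolumeLevel.LOW_NOISE, SceneType.OUTDOOR)
    )
    println(files.filter { matches(it, scene) }.map { it.name })  // [weekly-sync.m4a]
}
```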
In the embodiment of the application, at least one folder identification used for indicating scene information is displayed, so that the user can quickly and intuitively learn the scene information indicated by each folder identification and make a selection accordingly. A first input of a first folder identification in the at least one folder identification is received, where the first folder identification is associated with first scene information, which indicates that the user wants to view the audio/video files related to the first scene information indicated by the first folder identification. In response to the first input, at least one first audio-video file is displayed, where the key video frames in the first audio-video file are associated with the first scene information; that is, the key content of the first audio-video file is associated with the first scene information. The first audio-video files that the user wants to view and that are associated with the first scene information can therefore be displayed quickly, enabling the user to view them efficiently and improving file browsing efficiency.
According to the file display method provided by the embodiment of the application, the execution subject can be a file display device. In the embodiment of the present application, a method for executing a file display by using a file display device is taken as an example, and the file display device provided in the embodiment of the present application is described.
Fig. 8 is a block diagram of a document display apparatus according to an embodiment of the present application, where the apparatus 800 includes:
a display module 810 for displaying at least one folder identification, the folder identification being associated with the scene information;
a receiving module 820 for receiving a first input of a first folder identification of at least one of the folder identifications, the first folder identification being associated with first scene information;
the display module 810 is further configured to display at least one first audio-video file in response to the first input, where a key video frame in the first audio-video file is associated with the first scene information.
In a possible embodiment, the receiving module 820 is further configured to receive a second input to a second audio-video file of at least one of the first audio-video files;
the apparatus 800 may further include:
and the playing module is used for responding to the second input and playing the second audio-video file, and the progress bar of the second audio-video file comprises a first scene identifier, wherein the first scene identifier is used for indicating a video clip associated with the first scene information.
In a possible embodiment, the receiving module 820 is further configured to receive a third input to a third audio-video file of at least one of the first audio-video files;
the apparatus 800 may further include:
a playing module, configured to respond to the third input, and play the first video clip;
the storage module is used for storing the first video clip;
the first video clip is a video clip associated with the first scene information in the third audio/video file.
In one possible embodiment, the receiving module 820 is further configured to receive a fourth input to the first image;
the apparatus 800 may further include:
a determining module for determining the first scene information associated with the first image in response to the fourth input;
the determining module is further used for determining at least one first audio-video file from a plurality of audio-video files according to the first scene information;
and the storage module is used for moving at least one first audio and video file into a first folder indicated by the first folder identification.
In a possible embodiment, the receiving module 820 is further configured to receive a fifth input to a fourth audio-video file of the at least one audio-video file;
The apparatus 800 may further include:
the dividing module is used for responding to the fifth input and dividing the fourth audio/video file into at least one video segment;
and the storage module is also used for respectively moving at least one video clip into the folder corresponding to the video clip.
In a possible embodiment, the receiving module 820 is further configured to receive a sixth input of a second folder identification of at least one of the audio folder identifications;
the display module 810 is further configured to display at least one first audio file in response to the sixth input, wherein a key audio frame in the first audio file is associated with second scene information, and wherein the second folder identifier is used for indicating the second scene information.
In one possible embodiment, the receiving module 820 is further configured to receive a seventh input of the first audio information;
a determining module, further configured to determine, in response to the seventh input, the second scene information associated with the first audio information;
the determining module is further used for determining at least one first audio file from a plurality of audio files according to the second scene information;
and the storage module is also used for moving at least one first audio file into a second folder indicated by the second folder identification.
In one possible embodiment, the receiving module 820 is further configured to receive a seventh input of the first audio information and an audio feature, the audio feature including at least one of: sound source type, volume level, and scene type;
the determining module is specifically configured to: in response to the seventh input, the second scene information is determined from the first audio information and the audio features.
In the embodiment of the application, at least one folder identification used for indicating scene information is displayed, so that the user can quickly and intuitively learn the scene information indicated by each folder identification and make a selection accordingly. A first input of a first folder identification in the at least one folder identification is received, where the first folder identification is associated with first scene information, which indicates that the user wants to view the audio/video files related to the first scene information indicated by the first folder identification. In response to the first input, at least one first audio-video file is displayed, where the key video frames in the first audio-video file are associated with the first scene information; that is, the key content of the first audio-video file is associated with the first scene information. The first audio-video files that the user wants to view and that are associated with the first scene information can therefore be displayed quickly, enabling the user to view them efficiently and improving file browsing efficiency.
The document display apparatus in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR) / virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), etc., and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), an automated teller machine or a self-service machine, etc.; the embodiments of the present application are not specifically limited.
The file display device in the embodiment of the present application may be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The file display device provided in the embodiment of the present application can implement each process implemented by the foregoing method embodiment, and in order to avoid repetition, details are not repeated here.
Optionally, as shown in fig. 9, the embodiment of the present application further provides an electronic device 910, including a processor 911, a memory 912, and a program or an instruction stored in the memory 912 and capable of being executed on the processor 911, where the program or the instruction implements the steps of any of the foregoing embodiments of the file display method when executed by the processor 911, and the same technical effects are achieved, and for avoiding repetition, a detailed description is omitted herein.
The electronic device of the embodiment of the application includes mobile electronic devices and non-mobile electronic devices.
Fig. 10 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, and processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1010 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
Wherein, the display unit 1006 is configured to display at least one folder identifier, where the folder identifier is associated with scene information;
a user input unit 1007 for receiving a first input of a first folder identification of at least one of the folder identifications, the first folder identification being associated with first scene information;
the display unit 1006 is further configured to display at least one first audio-video file in response to the first input, where a key video frame in the first audio-video file is associated with the first scene information.
Optionally, the user input unit 1007 is further configured to receive a second input of a second audio/video file in at least one of the first audio/video files;
the display unit 1006 is further configured to play the second audio and video file in response to the second input, where a progress bar of the second audio and video file includes a first scene identifier, where the first scene identifier is used to indicate a video clip associated with the first scene information.
An audio output unit 1003, configured to play the second audio and video file in response to the second input, where a progress bar of the second audio and video file includes a first scene identifier, and the first scene identifier is used to indicate a video clip associated with the first scene information.
Optionally, the user input unit 1007 is further configured to receive a third input of a third audio/video file in at least one of the first audio/video files;
a display unit 1006, further configured to play the first video clip in response to the third input;
an audio output unit 1003 further configured to play the first video clip in response to the third input;
a memory 1009 for storing the first video clip;
the first video clip is a video clip associated with the first scene information in the third audio/video file.
Optionally, the user input unit 1007 is further configured to receive a fourth input of the first image;
a processor 1010 for determining the first scene information associated with the first image in response to the fourth input;
the processor 1010 is further configured to determine at least one first audio-video file from a plurality of audio-video files according to the first scene information;
the memory 1009 is further configured to move at least one of the first audio and video files into a first folder indicated by the first folder identifier.
Optionally, the user input unit 1007 is further configured to receive a fifth input of a fourth audio/video file in the at least one audio/video file;
The processor 1010 is further configured to divide the fourth audio-video file into at least one video clip in response to the fifth input;
the memory 1009 is further configured to move at least one video clip into a folder corresponding to the video clip, respectively.
Optionally, the user input unit 1007 is further configured to receive a sixth input of a second folder identification of the at least one audio folder identifications;
the display unit 1006 is further configured to display at least one first audio file in response to the sixth input, wherein the key audio frames in the first audio file are associated with second scene information, and the second folder identification is used for indicating the second scene information.
Optionally, the user input unit 1007 is further configured to receive a seventh input of the first audio information;
a processor 1010, further configured to determine, in response to the seventh input, the second scene information associated with the first audio information;
a processor 1010, further configured to determine at least one of the first audio files from a plurality of audio files according to the second scene information;
the memory 1009 is further configured to move at least one of the first audio files into a second folder indicated by the second folder identifier.
Optionally, the user input unit 1007 is further configured to receive a seventh input of the first audio information and audio features, including at least one of: sound source type, volume level, and scene type;
the processor 1010 is further configured to determine the second scene information based on the first audio information and the audio feature in response to the seventh input.
In the embodiment of the application, at least one folder identification used for indicating scene information is displayed, so that the user can quickly and intuitively learn the scene information indicated by each folder identification and make a selection accordingly. A first input of a first folder identification in the at least one folder identification is received, where the first folder identification is associated with first scene information, which indicates that the user wants to view the audio/video files related to the first scene information indicated by the first folder identification. In response to the first input, at least one first audio-video file is displayed, where the key video frames in the first audio-video file are associated with the first scene information; that is, the key content of the first audio-video file is associated with the first scene information. The first audio-video files that the user wants to view and that are associated with the first scene information can therefore be displayed quickly, enabling the user to view them efficiently and improving file browsing efficiency.
It should be understood that, in the embodiment of the present application, the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042; the graphics processor 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen and may include two parts: a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including, but not limited to, application programs and an operating system. The processor 1010 may integrate an application processor, which primarily handles the operating system, user interfaces, applications, and the like, and a modem processor, which primarily handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 1010.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.). Further, the memory 1009 may include volatile memory or nonvolatile memory, or the memory 1009 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM) or a direct Rambus RAM (DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the processes of the embodiment of the file display method are implemented, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
Wherein the processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, etc.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is used for running a program or an instruction, so as to implement each process of the embodiment of the file display method, and achieve the same technical effect, so that repetition is avoided, and no redundant description is provided here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the file display method described above, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in the reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (11)

1. A method of displaying a document, the method comprising:
displaying at least one folder identification, wherein the folder identification is associated with scene information;
receiving a first input of a first folder identification in at least one folder identification, wherein the first folder identification is associated with first scene information;
and in response to the first input, displaying at least one first audio-video file, wherein key video frames in the first audio-video file are associated with the first scene information.
2. The method of claim 1, wherein after displaying at least one first audiovisual file in response to the first input, the method further comprises:
receiving a second input to a second audio-video file of at least one of the first audio-video files;
and in response to the second input, playing the second audio-video file, wherein a progress bar of the second audio-video file comprises a first scene identifier, and the first scene identifier is used for indicating a video clip associated with the first scene information.
3. The method of claim 1, wherein after displaying at least one first audiovisual file in response to the first input, the method further comprises:
receiving a third input to a third audio-video file of at least one of the first audio-video files;
playing the first video clip in response to the third input; or, storing the first video clip;
the first video clip is a video clip associated with the first scene information in the third audio/video file.
4. The method of claim 1, wherein prior to displaying the at least one folder identification, the method further comprises:
receiving a fourth input to the first image;
determining, in response to the fourth input, the first scene information associated with the first image;
determining at least one first audio-video file from a plurality of audio-video files according to the first scene information;
and moving at least one first audio and video file into a first folder indicated by the first folder identification.
5. The method of claim 1, wherein after displaying at least one first audiovisual file in response to the first input, the method further comprises:
receiving a fifth input to a fourth audio-video file of the at least one audio-video file;
responsive to the fifth input, dividing the fourth audio-video file into at least one video segment;
and respectively moving at least one video clip into a folder corresponding to the video clip.
6. The method of claim 1, wherein the at least one folder identification comprises: at least one audio folder identification; the method further comprises the steps of:
receiving a sixth input of a second folder identification of at least one of the audio folder identifications;
in response to the sixth input, at least one first audio file is displayed, key audio frames in the first audio file being associated with second scene information, the second folder identification being used to indicate the second scene information.
7. The method of claim 6, wherein prior to receiving a sixth input of a second folder identification of at least one of the audio folder identifications, the method further comprises:
receiving a seventh input of the first audio information;
determining, in response to the seventh input, the second scene information associated with the first audio information;
determining at least one first audio file from a plurality of audio files according to the second scene information;
and moving at least one first audio file into a second folder indicated by the second folder identification.
8. The method of claim 7, wherein receiving a seventh input for the first audio information comprises:
receiving a seventh input for the first audio information and audio characteristics, the audio characteristics including at least one of: sound source type, volume level, and scene type;
the determining, in response to the seventh input, the second scene information associated with the first audio information, comprising:
in response to the seventh input, the second scene information is determined from the first audio information and the audio features.
9. A document display apparatus, the apparatus comprising:
the display module is used for displaying at least one folder identifier, and the folder identifier is associated with the scene information;
the receiving module is used for receiving a first input of a first folder identifier in at least one folder identifier, and the first folder identifier is associated with first scene information;
the display module is further configured to display at least one first audio-video file in response to the first input, where a key video frame in the first audio-video file is associated with the first scene information.
10. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the file display method of any one of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the file display method according to any one of claims 1 to 8.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination