CN105681894A - Device and method for displaying a video file


Info

Publication number
CN105681894A
Authority
CN
China
Prior art keywords
video file
video
display
video segment
key frame
Prior art date
Legal status
Pending
Application number
CN201610004162.1A
Other languages
Chinese (zh)
Inventor
刘林汶
Current Assignee
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nubia Technology Co Ltd
Priority to CN201610004162.1A
Publication of CN105681894A
Priority to PCT/CN2016/113751 (WO2017118353A1)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402: ... involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218: ... by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845: Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455: ... involving pointers to the content, e.g. pointers to the I-frames of the video stream

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a device for displaying a video file. The device comprises: an acquisition module which, when the video file is read, decodes the video file and obtains key frame data of the video file; an extraction module which extracts highlight video segments from the video file according to a preset rule, based on the key frame data; and a display module which, when a display instruction for the video file is received, displays the highlight video segment being played. The invention also discloses a method for displaying the video file. Compared with a static video thumbnail, the device and method provide the user with more information about the video file; when the video file is displayed, the user can decide from the highlight segment being played whether to watch the video file, and is therefore better guided towards watching videos he or she likes.

Description

Device and method for displaying a video file
Technical field
The present invention relates to the field of communication technology, and in particular to a device and method for displaying a video file.
Background
At present, when a video file is presented on a terminal, it is generally presented in the form of a video thumbnail, and the user selects the content shown in the thumbnail and clicks it to watch the content of the video file. However, displaying a video file with a static image such as a video thumbnail can only show very limited information about the video file; the user cannot determine from the thumbnail whether the video file needs to be watched, and therefore cannot be well guided towards selecting and watching videos he or she likes.
Summary of the invention
The main purpose of the present invention is to propose a device and method for displaying a video file, intended to better guide the user towards selecting and watching videos he or she likes.
To achieve the above object, the present invention provides a device for displaying a video file, the device comprising:
an acquisition module, configured to decode a video file when the video file is read and obtain key frame data of the video file;
an extraction module, configured to extract, based on the key frame data, highlight video segments from the video file according to a preset rule;
a display module, configured to display the highlight video segment being played when a display instruction for the video file is received.
Optionally, the extraction module is further configured to:
generate a histogram for each item of key frame data; calculate the graphic information difference value between the histograms of adjacent key frame data; obtain the accumulated graphic information difference value between the histograms of adjacent key frame data within each preset unit time period; take each unit time period whose accumulated graphic information difference value is greater than a preset first difference threshold as a highlight time period, extract the video segments corresponding to the highlight time periods from the video file, and synthesize the extracted video segments into a highlight video segment.
Optionally, the acquisition module is further configured to:
decode the video file when the video file is read and obtain audio data of the video file;
and the extraction module is further configured to:
generate a waveform diagram corresponding to each second of audio data; calculate the audio information difference value between the waveform diagrams of adjacent audio data; obtain the accumulated audio information difference value between the waveform diagrams of adjacent audio data within each preset unit time period; take each unit time period whose accumulated graphic information difference value and/or accumulated audio information difference value is greater than a preset second difference threshold as a highlight time period, extract the video segments corresponding to the highlight time periods from the video file, and synthesize the extracted video segments into a highlight video segment.
Optionally, the display module is further configured to:
establish a mapping relation between the highlight video segment and the video file; and, when a display instruction for the video file is received, retrieve the highlight video segment according to the mapping relation and play it so as to display the video file.
Optionally, the device for displaying a video file further comprises:
a playing module, configured to play the video file when a click command on the highlight video segment being played is received.
In addition, to achieve the above object, the present invention also proposes a method for displaying a video file, the method comprising the following steps:
decoding a video file when the video file is read, and obtaining key frame data of the video file;
extracting, based on the key frame data, highlight video segments from the video file according to a preset rule;
displaying the highlight video segment being played when a display instruction for the video file is received.
Optionally, the step of extracting, based on the key frame data, highlight video segments from the video file according to a preset rule comprises:
generating a histogram for each item of key frame data;
calculating the graphic information difference value between the histograms of adjacent key frame data;
obtaining the accumulated graphic information difference value between the histograms of adjacent key frame data within each preset unit time period;
taking each unit time period whose accumulated graphic information difference value is greater than a preset first difference threshold as a highlight time period, extracting the video segments corresponding to the highlight time periods from the video file, and synthesizing the extracted video segments into a highlight video segment.
Optionally, the step of decoding the video file when the video file is read and obtaining key frame data of the video file further comprises:
decoding the video file when the video file is read, and obtaining audio data of the video file;
and the step of extracting, based on the key frame data, highlight video segments from the video file according to a preset rule further comprises:
generating a waveform diagram corresponding to each second of audio data;
calculating the audio information difference value between the waveform diagrams of adjacent audio data;
obtaining the accumulated audio information difference value between the waveform diagrams of adjacent audio data within each preset unit time period;
taking each unit time period whose accumulated graphic information difference value and/or accumulated audio information difference value is greater than a preset second difference threshold as a highlight time period, extracting the video segments corresponding to the highlight time periods from the video file, and synthesizing the extracted video segments into a highlight video segment.
Optionally, the step of displaying the highlight video segment being played when a display instruction for the video file is received comprises:
establishing a mapping relation between the highlight video segment and the video file;
when a display instruction for the video file is received, retrieving the highlight video segment according to the mapping relation and playing it so as to display the video file.
Optionally, after the step of displaying the highlight video segment being played when a display instruction for the video file is received, the method further comprises:
playing the video file when a click command on the highlight video segment being played is received.
With the device and method for displaying a video file proposed by the present invention, when a video file is read, highlight video segments are extracted from the video file based on the key frame data obtained by decoding the video file; when a display instruction for the video file is received, the highlight video segment being played is displayed. Because the video file is displayed by playing its highlight video segment when the file is presented, more information about the video file can be provided to the user than with a static video thumbnail, and the user can decide from the highlight segment being played whether the video file needs to be watched, so the user is better guided towards selecting and watching videos he or she likes.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for realizing the embodiments of the present invention;
Fig. 2 is a block diagram of the electrical structure of the camera in Fig. 1;
Fig. 3 is a functional block diagram of the first embodiment of the device for displaying a video file according to the present invention;
Fig. 4 is a functional block diagram of the second embodiment of the device for displaying a video file according to the present invention;
Fig. 5 is a flowchart of the first embodiment of the method for displaying a video file according to the present invention;
Fig. 6 is a schematic diagram of obtaining the highlight time periods in the video file in the first embodiment of the method for displaying a video file according to the present invention;
Fig. 7 is a flowchart of the second embodiment of the method for displaying a video file according to the present invention.
The realization of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
A mobile terminal for realizing the embodiments of the present invention is now described with reference to the accompanying drawings. In the following description, suffixes such as "module", "part" or "unit" used to denote elements are used only to facilitate the description of the present invention and have no specific meaning in themselves; "module" and "part" may therefore be used interchangeably.
A mobile terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), tablet computers, PMPs (portable media players) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following it is assumed that the terminal is a mobile terminal; however, those skilled in the art will appreciate that, apart from elements used specifically for mobile purposes, the structure according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for realizing the embodiments of the present invention.
The mobile terminal 100 may comprise a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, a video decoding unit 200, and so on. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may alternatively be implemented. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 generally includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication device or network. For example, the wireless communication unit may include at least one of a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114 and a location information module 115.
The mobile communication module 112 sends radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal and a server. Such radio signals may include voice call signals, video call signals, or various types of data sent and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access for the mobile terminal. This module may be internally or externally coupled to the terminal. The wireless Internet access technologies involved in this module may include WLAN (wireless local area network, Wi-Fi), Wibro (wireless broadband), Wimax (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), etc.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra wideband (UWB), ZigBee™, etc.
The location information module 115 is a module for checking or obtaining location information of the mobile terminal. A typical example of the location information module is GPS (Global Positioning System). According to current technology, the GPS module 115 calculates distance information from three or more satellites together with accurate time information and applies triangulation to the calculated information, so as to accurately calculate three-dimensional current location information by longitude, latitude and altitude. Currently, the method used for calculating position and time information uses three satellites and corrects the error of the calculated position and time information by using another satellite. In addition, the GPS module 115 can calculate speed information by continuously calculating the current location information in real time.
The A/V input unit 120 is used to receive audio or video signals. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes the image data of still pictures or video obtained by the image capture device in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or another storage medium) or sent via the wireless communication unit 110, and two or more cameras 121 may be provided depending on the structure of the mobile terminal. The microphone 122 may receive sound (audio data) in an operating mode such as phone call mode, recording mode or speech recognition mode, and can process such sound into audio data. In phone call mode the processed audio (voice) data may be converted into a format that can be sent to a mobile communication base station via the mobile communication module 112. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated in the course of receiving and sending audio signals.
The video decoding unit 200 is used to decode an original video and obtain the decoded audio and video data of the original video.
The user input unit 130 may generate key input data according to commands input by the user so as to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome sheet, a touch pad (for example, a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by being touched), a scroll wheel, a joystick, etc. In particular, when the touch pad is superimposed on the display unit 151 as a layer, a touch screen can be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (for example, the open or closed state of the mobile terminal 100), the position of the mobile terminal 100, the presence or absence of contact by the user with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the acceleration or deceleration movement and direction of the mobile terminal 100, and so on, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 can sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power and whether the interface unit 170 is coupled with an external device. The interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, the external devices may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, etc. The identification module may store various information for authenticating the user of the mobile terminal 100 and may include a user identification module (UIM), a subscriber identification module (SIM), a universal subscriber identification module (USIM), etc. In addition, the device having the identification module (hereinafter referred to as the "identification device") may take the form of a smart card, so the identification device may be connected with the mobile terminal 100 via a port or other connecting means. The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or may be used to transfer data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. The various command signals or the power input from the cradle may serve as a signal for identifying whether the mobile terminal is correctly mounted on the cradle. The output unit 150 is constructed to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audible and/or tactile manner.
The output unit 150 may include a display unit 151, an audio output module 152, and so on.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in phone call mode, the display unit 151 may display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (such as text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in video call mode or image capture mode, the display unit 151 may display captured and/or received images, or a UI or GUI showing video or images and the related functions, etc.
Meanwhile, when the display unit 151 and the touch pad are superimposed on each other as a layer to form a touch screen, the display unit 151 may serve both as an input device and as an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, etc. Some of these displays may be constructed to be transparent to allow the user to view from the outside; these may be called transparent displays, and a typical transparent display may be, for example, a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display means); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect the touch input pressure as well as the touch input position and touch input area.
The audio output module 152 may, when the mobile terminal is in a mode such as call signal reception mode, call mode, recording mode, speech recognition mode or broadcast reception mode, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and so on.
The memory 160 may store software programs for the processing and control operations performed by the controller 180, or may temporarily store data that has been or will be output (for example, a phone book, messages, still images, videos, etc.). Moreover, the memory 160 may store data about the various forms of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc. Moreover, the mobile terminal 100 may cooperate, over a network connection, with a network storage device that performs the storage function of the memory 160.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls, and so on. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be configured within the controller 180 or may be configured separately from the controller 180. The controller 180 may perform pattern recognition processing so as to recognize handwriting input or picture drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power under the control of the controller 180 and supplies the appropriate power required to operate each element and component.
The various implementations described herein may be realized using a computer-readable medium of computer software, hardware, or any combination thereof. For a hardware implementation, the implementations described herein may be realized using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such an implementation may be realized in the controller 180. For a software implementation, an implementation such as a process or function may be realized with separate software modules that allow at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and the software code may be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. Below, for the sake of brevity, a slide-type mobile terminal among the various types of mobile terminals, such as folding, bar, swing and slide types, will be described as an example. Therefore, the present invention can be applied to any type of mobile terminal and is not limited to slide-type mobile terminals.
Refer to Fig. 2, which is a block diagram of the electrical structure of the camera in Fig. 1.
The photographing lens 1211 is composed of a plurality of optical lenses for forming an image of a subject, and is a single-focus lens or a zoom lens. The photographing lens 1211 can move in the direction of the optical axis under the control of the lens driver 1221. The lens driver 1221 controls the focus position of the photographing lens 1211 according to a control signal from the lens driving control circuit 1222, and in the case of a zoom lens can also control the focal distance. The lens driving control circuit 1222 performs drive control of the lens driver 1221 according to control commands from the microcomputer 1217.
An imaging element 1212 is arranged on the optical axis of the photographing lens 1211 near the position of the subject image formed by the photographing lens 1211. The imaging element 1212 is used to capture the subject image and obtain image data. Photodiodes constituting the individual pixels are arranged two-dimensionally in a matrix on the imaging element 1212. Each photodiode produces a photoelectric conversion current corresponding to the amount of light received, and this photoelectric conversion current accumulates charge in a capacitor connected to each photodiode. The front surface of each pixel is provided with an RGB colour filter in a Bayer arrangement.
The imaging element 1212 is connected to an imaging circuit 1213. The imaging circuit 1213 performs charge accumulation control and image signal readout control in the imaging element 1212, reduces the reset noise of the read image signal (analog image signal), performs waveform shaping, and then performs gain boosting and so on to obtain an appropriate signal level. The imaging circuit 1213 is connected to an A/D converter 1214, which performs analog-to-digital conversion of the analog image signal and outputs a digital image signal (hereinafter referred to as image data) to the bus 1227.
The bus 1227 is a transmission path for transmitting the various data read or generated inside the camera. The above A/D converter 1214 is connected to the bus 1227, to which are also connected an image processor 1215, a JPEG processor 1216, a microcomputer 1217, an SDRAM (Synchronous Dynamic Random Access Memory) 1218, a memory interface (hereinafter referred to as memory I/F) 1219 and an LCD (Liquid Crystal Display) driver 1220.
The image processor 1215 performs various kinds of image processing, such as OB subtraction, white balance adjustment, colour matrix computation, gamma conversion, colour difference signal processing, noise removal processing, simultaneous conversion processing and edge processing, on the image data based on the output of the imaging element 1212. The JPEG processor 1216, when recording image data on the recording medium 1225, compresses the image data read from the SDRAM 1218 according to the JPEG compression method. In addition, the JPEG processor 1216 decompresses JPEG image data for image reproduction and display. When decompressing, a file recorded on the recording medium 1225 is read, decompressed in the JPEG processor 1216, and the decompressed image data is temporarily stored in the SDRAM 1218 and displayed on the LCD 1226. In the present embodiment the JPEG method is adopted as the image compression/decompression method, but the compression/decompression method is not limited to this; other compression/decompression methods such as MPEG, TIFF or H.264 may of course be adopted.
The microcomputer 1217 functions as the control unit of the camera as a whole and controls the various processing sequences of the camera in a unified manner. The microcomputer 1217 is connected to an operation unit 1223 and a flash memory 1224.
The operation unit 1223 includes, but is not limited to, physical buttons or virtual keys. These physical or virtual keys may be operating controls such as various input buttons and input keys, for example a power button, a capture key, an edit key, a moving-image button, a playback button, a menu button, a cross key, an OK button, a delete button and a zoom button, and the operating state of these controls is detected.
The detection results are output to the microcomputer 1217. In addition, a touch panel is provided on the front surface of the LCD 1226 serving as the display; the touch position of the user is detected and output to the microcomputer 1217. The microcomputer 1217 executes the various processing sequences corresponding to the user's operation according to the detection results from the operation unit 1223.
The flash memory 1224 stores the programs for executing the various processing sequences of the microcomputer 1217. The microcomputer 1217 performs the overall control of the camera according to these programs. In addition, the flash memory 1224 stores various adjustment values of the camera; the microcomputer 1217 reads the adjustment values and controls the camera according to them.
The SDRAM 1218 is an electrically rewritable volatile memory used for temporarily storing image data and the like. The SDRAM 1218 temporarily stores the image data output from the A/D converter 1214 and the image data processed by the image processor 1215, the JPEG processor 1216 and so on.
The memory interface 1219 is connected to the recording medium 1225, and controls the writing of image data and of data such as file headers attached to the image data onto the recording medium 1225, as well as the reading of such data from the recording medium 1225. The recording medium 1225 is, for example, a recording medium such as a memory card that can be freely attached to and detached from the camera body, but is not limited to this; it may also be a hard disk or the like built into the camera body.
The LCD driver 1210 is connected to the LCD 1226. The image data processed by the image processor 1215 is stored in the SDRAM 1218; when display is needed, the image data stored in the SDRAM 1218 is read and displayed on the LCD 1226. Alternatively, the image data compressed by the JPEG processor 1216 is stored in the SDRAM 1218; when display is needed, the JPEG processor 1216 reads the compressed image data in the SDRAM 1218, decompresses it, and the decompressed image data is displayed on the LCD 1226.
The LCD 1226 is arranged on the back side of the camera body and performs image display. The LCD 1226 is an LCD, but it is not limited to this; various other display panels such as organic EL panels may also be adopted.
Based on the above schematic hardware structure of the mobile terminal and the electrical structure of the camera, the embodiments of the device for displaying a video file according to the present invention are proposed.
As shown in Fig. 3, in the first embodiment of the device for displaying a video file according to the present invention, the device for displaying a video file comprises:
an acquisition module 01, configured to decode a video file when the video file is read and obtain key frame data of the video file;
In the present embodiment, when the video playback device reads a video file, the video file is decoded by means of various video decoding technologies, redundant data in the original video data such as non-key-frame data is removed, and the key frame data of the video file is obtained. Here, a frame is the single picture that is the smallest unit of an animation, equivalent to one frame of film on a cinema reel; on the timeline of animation software a frame appears as a cell or a mark. A key frame is equivalent to an original drawing in 2D animation and refers to the frame at which a key action in the movement or change of a character or object occurs. The key frame data of the video file obtained in the present embodiment is the frame data at which key actions in the movement or change of characters or objects in the video file occur, and can embody the main content of the video file.
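As an illustration of this step, the sketch below extracts the key frames of a video file together with their timestamps. It is a minimal example only, assuming the FFmpeg-based PyAV library as the decoder and a hypothetical file name; the embodiment itself does not prescribe any particular decoding library.

```python
# Sketch only: extract key frames with PyAV (an assumed FFmpeg binding),
# not a prescribed implementation of the acquisition module.
import av  # pip install av

def extract_key_frames(path):
    """Decode only the key frames of a video file and return (time_in_seconds, rgb_array) pairs."""
    key_frames = []
    with av.open(path) as container:
        stream = container.streams.video[0]
        stream.codec_context.skip_frame = "NONKEY"  # skip non-key frames during decoding
        for frame in container.decode(stream):
            key_frames.append((frame.time, frame.to_ndarray(format="rgb24")))
    return key_frames

if __name__ == "__main__":
    frames = extract_key_frames("example.mp4")  # hypothetical file name
    print(f"{len(frames)} key frames found")
```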
an extraction module 02, configured to extract, based on the key frame data, highlight video segments from the video file according to a preset rule;
After the key frame data that can embody the main content of the video file has been obtained, since the key frame data of the video file contains the key actions in the movement or change of characters or objects in the video file, that is, contains the highlight video content of the video file, the highlight video segments in the video file can be extracted according to the key frame data.
In one embodiment, since the scenes, characters, actions and other information of the highlight sections of a video file are usually relatively rich, and the corresponding amount of key frame data is also relatively large, the highlight sections in the video file can be identified according to the amount of information in the key frame data. For example, the key frame data with a larger amount of information in the video file can be obtained, and the obtained key frame data encoded and synthesized into the highlight video segment of the video file; alternatively, the video can be divided into several equal time periods, the time periods in which the total amount of information of the key frame data is larger obtained, the video segments corresponding to those time periods extracted from the video file, and these video segments encoded and synthesized into the highlight video segment of the video file.
In another embodiment, since the content of the highlight sections of a video file usually rises and falls freely and the images before and after differ greatly, the highlight sections in the video file can be identified according to the magnitude of the information difference value between adjacent key frame data. For example, the adjacent key frame data with larger information difference values in the video file can be obtained, and the obtained key frame data encoded and synthesized into the highlight video segment of the video file; alternatively, the video can be divided into several equal time periods, the time periods in which the sum of the information difference values of adjacent key frame data is larger obtained, the video segments corresponding to those time periods extracted from the video file, and these video segments encoded and synthesized into the highlight video segment of the video file.
In addition, since the highlight section of a video file is usually located in the middle or at the end, the key frame data located in a preset middle time period or ending time period of the video file can also be obtained, and the obtained key frame data encoded and synthesized into the highlight video segment of the video file; or the video segment located in the preset middle time period or ending time period can be extracted directly from the video file as the highlight video segment, as sketched below.
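A minimal sketch of this positional fallback follows; the 10-second window length and the file names are assumptions, and the extraction is delegated to the ffmpeg command-line tool rather than to any component named in the embodiment.

```python
# Sketch only: take a fixed window from the middle (or the end) of the file as the
# highlight segment when no content analysis is performed. Window length is an assumption.
import subprocess

def positional_highlight(src, dst, duration_s, window_s=10.0, use_end=False):
    """Copy a window from the middle or the end of the video into a highlight clip."""
    start = max(duration_s - window_s, 0.0) if use_end else max((duration_s - window_s) / 2.0, 0.0)
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start), "-t", str(window_s),
         "-i", src, "-c", "copy", dst],
        check=True,
    )

# positional_highlight("movie.mp4", "movie_highlight.mp4", duration_s=120.0)  # hypothetical paths
```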
a display module 03, configured to display the highlight video segment being played when a display instruction for the video file is received.
After the highlight video segment in the video file has been extracted according to the key frame data, a mapping relation between the highlight video segment and the video file can be established, so that the highlight video segment and the video file are associated by mapping. In this way, when the video file needs to be displayed, the highlight video segment associated with the video file can be retrieved according to the established mapping relation and played, so that the playing highlight video segment is used to display the video file. The user can learn the highlight content of the video file from the highlight segment being played, and can therefore quickly decide whether all the video content of the video file needs to be watched, achieving the object of better guiding the user in selecting a video to watch by playing the highlight video segment of the video file when the file is displayed.
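A minimal sketch of this mapping association is shown below; the in-memory registry and the player callback are hypothetical stand-ins for whatever gallery component actually issues the display instruction.

```python
# Sketch only: associate each video file with its highlight clip and play the
# clip when a display instruction for the file arrives. All names are hypothetical.
highlight_map = {}  # video file path -> highlight clip path

def register_highlight(video_path, clip_path):
    """Establish the mapping relation between the video file and its highlight clip."""
    highlight_map[video_path] = clip_path

def on_display_instruction(video_path, play_clip):
    """Display the video file by playing its mapped highlight clip."""
    clip = highlight_map.get(video_path, video_path)  # fall back to the file itself
    play_clip(clip)

register_highlight("movie.mp4", "movie_highlight.mp4")
on_display_instruction("movie.mp4", play_clip=print)  # print() stands in for a real player
```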
In the present embodiment, when a video file is read, the highlight video segments in the video file are extracted based on the key frame data obtained by decoding the video file; when a display instruction for the video file is received, the highlight video segment being played is displayed. Because the video file is displayed by playing its highlight video segment when the file is presented, more information about the video file can be provided to the user than with a static video thumbnail, and the user can decide from the highlight segment being played whether the video file needs to be watched, so the user is better guided towards selecting and watching videos he or she likes.
Further, in other embodiments, the extraction module 02 may also be configured to:
generate a histogram for each item of key frame data; calculate the graphic information difference value between the histograms of adjacent key frame data; obtain the accumulated graphic information difference value between the histograms of adjacent key frame data within each preset unit time period; take each unit time period whose accumulated graphic information difference value is greater than a preset first difference threshold as a highlight time period, extract the video segments corresponding to the highlight time periods from the video file, and synthesize the extracted video segments into a highlight video segment.
In the present embodiment, after the video file has been decoded to obtain its key frame data, a histogram of each item of key frame data can be generated, and the graphic information difference value between the histograms of adjacent key frame data calculated. This graphic information difference value is determined by the distribution and the area of the histograms of the adjacent key frame data. The key frame data of the video file is divided into several unit time periods; the length of a unit time period may be set to, for example, 10 s, and the unit time period then contains all the key frame data within those 10 s. The accumulated graphic information difference value between the histograms of adjacent key frame data within each unit time period is then obtained. For example, if the length of the unit time period is 10 s and there is one key frame per second, the unit time period contains in order histogram 1, histogram 2, ..., histogram 10, and the accumulated graphic information difference value within that unit time period is (histogram 2 - histogram 1) + (histogram 3 - histogram 2) + ... + (histogram 10 - histogram 9).
Since the content of the highlight sections of a video file usually rises and falls freely and the images before and after differ greatly, the larger the accumulated graphic information difference value of a unit time period, the greater the possibility that the video segment corresponding to that unit time period is a highlight of the video file. In the present embodiment a first difference threshold can be set in advance; if the accumulated graphic information difference value of a certain unit time period is greater than the preset first difference threshold, the video segment corresponding to that unit time period is identified as a highlight of the video file. The unit time periods whose accumulated graphic information difference value is greater than the preset first difference threshold can be obtained as highlight time periods, the video segments corresponding to these highlight time periods extracted from the video file, and the extracted video segments encoded and synthesized into one highlight video segment.
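The following sketch follows the rule just described: one histogram per key frame, a difference value between adjacent histograms, accumulation over preset unit time periods, and selection of the periods exceeding the first difference threshold. The frame format, the bin count, the threshold value and the difference metric (sum of absolute bin differences) are assumptions, not values prescribed by the embodiment.

```python
# Sketch only: histogram-based selection of highlight time periods.
# key_frames is a list of (time_in_seconds, rgb_array) pairs, e.g. from a key-frame decoder.
import numpy as np

def highlight_periods(key_frames, unit_s=10.0, first_threshold=50.0, bins=64):
    # One grey-level histogram per key frame (bin count is an assumption).
    hists = [np.histogram(f.mean(axis=2), bins=bins, range=(0, 255))[0].astype(float)
             for _, f in key_frames]
    times = [t for t, _ in key_frames]

    # Accumulate the difference between adjacent histograms per unit time period.
    accumulated = {}
    for (t, h_prev), h_next in zip(zip(times, hists), hists[1:]):
        period = int(t // unit_s)
        diff = float(np.abs(h_next - h_prev).sum())  # graphic information difference value
        accumulated[period] = accumulated.get(period, 0.0) + diff

    # A period whose accumulated difference exceeds the first threshold is a highlight period.
    return sorted(p for p, d in accumulated.items() if d > first_threshold)
```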
Further, in other embodiments, the acquisition module 01 may also be configured to:
decode the video file when the video file is read and obtain audio data of the video file;
and the extraction module 02 may also be configured to:
generate a waveform diagram corresponding to each second of audio data; calculate the audio information difference value between the waveform diagrams of adjacent audio data; obtain the accumulated audio information difference value between the waveform diagrams of adjacent audio data within each preset unit time period; take each unit time period whose accumulated graphic information difference value and/or accumulated audio information difference value is greater than a preset second difference threshold as a highlight time period, extract the video segments corresponding to the highlight time periods from the video file, and synthesize the extracted video segments into a highlight video segment.
In the present embodiment, if the video file contains audio, then while the video file is decoded to obtain its key frame data, the audio data of the video file is also obtained. A waveform diagram corresponding to each second of audio data can be generated, and the audio information difference value between the waveform diagrams of adjacent audio data calculated; this audio information difference value can be computed as the difference between the waveform diagrams of adjacent audio data. The audio data of the video file is divided into several unit time periods; the length of a unit time period may be set to, for example, 10 s, and the unit time period then contains all the audio data within those 10 s. The accumulated audio information difference value between the waveform diagrams of adjacent audio data within each unit time period is then obtained. For example, if the length of the unit time period is 10 s, the unit time period contains in order waveform diagram 1, waveform diagram 2, ..., waveform diagram 10, and the accumulated audio information difference value within that unit time period is (waveform diagram 2 - waveform diagram 1) + (waveform diagram 3 - waveform diagram 2) + ... + (waveform diagram 10 - waveform diagram 9).
Since the content of the highlight sections of a video file usually rises and falls freely and the waveform fluctuations of the audio before and after are also relatively large, the larger the accumulated audio information difference value of a unit time period, the greater the possibility that the video segment corresponding to that unit time period is a highlight of the video file. In the present embodiment, when the accumulated graphic information difference value and the accumulated audio information difference value are considered at the same time, a second difference threshold can be set in advance; if the accumulated graphic information difference value or the accumulated audio information difference value of a certain unit time period is greater than the preset second difference threshold, the video segment corresponding to that unit time period is identified as a highlight of the video file. The unit time periods whose accumulated graphic information difference value or accumulated audio information difference value is greater than the preset second difference threshold can be obtained as highlight time periods, the video segments corresponding to these highlight time periods extracted from the video file, and the extracted video segments encoded and synthesized into one highlight video segment.
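The sketch below combines the image criterion with the audio criterion in the way this paragraph describes: per-second waveforms, difference values between adjacent seconds, accumulation per unit time period, and an OR of the two accumulated values against the second difference threshold. The use of raw PCM samples and the specific difference metric are assumptions.

```python
# Sketch only: combine accumulated image and audio difference values per unit
# time period and keep any period that exceeds the second difference threshold.
import numpy as np

def audio_accumulated_diffs(samples, sample_rate, unit_s=10.0):
    """samples: 1-D PCM array; returns {period index: accumulated audio information difference value}."""
    n_seconds = len(samples) // sample_rate
    seconds = [samples[i * sample_rate:(i + 1) * sample_rate].astype(float)
               for i in range(n_seconds)]  # one waveform per second
    accumulated = {}
    for i in range(1, n_seconds):
        diff = float(np.abs(seconds[i] - seconds[i - 1]).sum())  # audio information difference value
        period = int(i // unit_s)
        accumulated[period] = accumulated.get(period, 0.0) + diff
    return accumulated

def combined_highlight_periods(img_acc, aud_acc, second_threshold):
    """img_acc / aud_acc: {period: accumulated difference value}; OR of the two criteria."""
    periods = set(img_acc) | set(aud_acc)
    return sorted(p for p in periods
                  if img_acc.get(p, 0.0) > second_threshold
                  or aud_acc.get(p, 0.0) > second_threshold)
```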
Since the present embodiment simultaneously considers the two factors that determine a highlight in a video file, namely a large difference between the images before and after and a relatively large audio fluctuation, the highlights in the video file can be extracted more accurately, thereby providing more accurate highlight video of the video file to guide the user in selecting and watching videos he or she likes.
Further, in other embodiments, the key frame data and the audio data of the video file can each be divided into 10 equal time periods, labelled 1, 2, ..., 10, and the accumulated graphic information difference value and the accumulated audio information difference value corresponding to each time period are marked above the respective time periods of the key frame data and the audio data; for example, the accumulated graphic information difference value corresponding to time period 1 of the key frame data is 50, the accumulated audio information difference value corresponding to time period 1 of the audio data is 30, and so on. The second difference threshold is set in advance to 50; if the accumulated graphic information difference value or the accumulated audio information difference value of a certain unit time period is greater than the preset second difference threshold of 50, the video segment corresponding to that unit time period is identified as a highlight of the video file. In the present embodiment, the time periods whose accumulated graphic information difference value or accumulated audio information difference value is greater than the preset second difference threshold of 50 are time period 1, time period 4, time period 5, time period 8 and time period 10; therefore, time periods 1, 4, 5, 8 and 10 are taken as highlight time periods, the five video segments corresponding to time periods 1, 4, 5, 8 and 10 are extracted from the video file, and the five extracted video segments are encoded and synthesized into one highlight video segment.
As shown in Fig. 4, the second embodiment of the present invention proposes a device for displaying a video file which, on the basis of the above embodiment, further comprises:
a playing module 04, configured to play the video file when a click command on the highlight video segment being played is received.
In the present embodiment, when the playing highlight video segment is used to display the video file, if a click command on the playing highlight video segment is received, it is judged that the user wants to watch all the video content of the video file, and the video file is then played. In this way the user can decide, from the highlight segment being played when the video file is displayed, whether the video file needs to be watched, and when the user does want to watch it, all the video content of the video file can be played.
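A minimal sketch of this playing-module behaviour, with a hypothetical callback-based player interface:

```python
# Sketch only: while the highlight clip is shown, a click command switches to
# playing the full video file. The player callback is hypothetical.
def on_highlight_clicked(video_path, play_full_video):
    """A click on the playing highlight clip means the user wants the whole file."""
    play_full_video(video_path)

# Example wiring: print() stands in for the real playback call.
on_highlight_clicked("movie.mp4", play_full_video=lambda path: print("playing", path))
```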
The present invention further provides a method for displaying a video file. Referring to Fig. 5, in the first embodiment of the method for displaying a video file according to the present invention, the method comprises:
Step S10: when a video file is read, decoding the video file and obtaining key frame data of the video file;
In the present embodiment, when the video playback device reads a video file, the video file is decoded by means of various video decoding technologies, redundant data in the original video data such as non-key-frame data is removed, and the key frame data of the video file is obtained. Here, a frame is the single picture that is the smallest unit of an animation, equivalent to one frame of film on a cinema reel; on the timeline of animation software a frame appears as a cell or a mark. A key frame is equivalent to an original drawing in 2D animation and refers to the frame at which a key action in the movement or change of a character or object occurs. The key frame data of the video file obtained in the present embodiment is the frame data at which key actions in the movement or change of characters or objects in the video file occur, and can embody the main content of the video file.
Step S20: based on the key frame data, extracting highlight video segments from the video file according to a preset rule;
After the key frame data that can embody the main content of the video file has been obtained, since the key frame data of the video file contains the key actions in the movement or change of characters or objects in the video file, that is, contains the highlight video content of the video file, the highlight video segments in the video file can be extracted according to the key frame data.
In one embodiment, since the scenes, characters, actions and other information of the highlight sections of a video file are usually relatively rich, and the corresponding amount of key frame data is also relatively large, the highlight sections in the video file can be identified according to the amount of information in the key frame data. For example, the key frame data with a larger amount of information in the video file can be obtained, and the obtained key frame data encoded and synthesized into the highlight video segment of the video file; alternatively, the video can be divided into several equal time periods, the time periods in which the total amount of information of the key frame data is larger obtained, the video segments corresponding to those time periods extracted from the video file, and these video segments encoded and synthesized into the highlight video segment of the video file.
In another embodiment, since the content of the highlight sections of a video file usually rises and falls freely and the images before and after differ greatly, the highlight sections in the video file can be identified according to the magnitude of the information difference value between adjacent key frame data. For example, the adjacent key frame data with larger information difference values in the video file can be obtained, and the obtained key frame data encoded and synthesized into the highlight video segment of the video file; alternatively, the video can be divided into several equal time periods, the time periods in which the sum of the information difference values of adjacent key frame data is larger obtained, the video segments corresponding to those time periods extracted from the video file, and these video segments encoded and synthesized into the highlight video segment of the video file.
In addition, since the highlight section of a video file is usually located in the middle or at the end, the key frame data located in a preset middle time period or ending time period of the video file can also be obtained, and the obtained key frame data encoded and synthesized into the highlight video segment of the video file; or the video segment located in the preset middle time period or ending time period can be extracted directly from the video file as the highlight video segment.
Step S30: when a display instruction for the video file is received, displaying the playing highlight video segment.
After the highlight video segment has been extracted from the video file according to the key frame data, a mapping relation between the highlight video segment and the video file can be established, associating the two. When the video file needs to be displayed, the highlight video segment associated with the video file is retrieved according to the established mapping relation and played, so that the playing highlight video segment is used to display the video file. From the playing highlight segment the user can quickly learn the highlight content of the video file and decide whether to watch the full content, so that displaying the video file through its playing highlight segment better guides the user in choosing what to watch.
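A minimal sketch of the mapping association follows, assuming an in-memory dictionary keyed by the video file path and a hypothetical player object; none of these names come from the embodiment.

```python
highlight_map = {}   # video file path -> path of its generated highlight clip

def register_highlight(video_path, highlight_path):
    """Record the mapping association between a video file and its highlight segment."""
    highlight_map[video_path] = highlight_path

def on_display_instruction(video_path, player):
    """On a display instruction, retrieve and play the associated highlight clip."""
    clip = highlight_map.get(video_path)
    if clip is not None:
        player.play(clip)    # 'player' is a hypothetical playback object
```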
In the present embodiment, when a video file is read, the highlight video segment in the video file is extracted based on the key frame data obtained by decoding the video file; when a display instruction for the video file is received, the playing highlight video segment is displayed. Because the video file is presented in the form of its playing highlight segment, more information about the file is provided to the user than by a static video thumbnail, and the user can decide from the playing highlight segment whether to watch the file, which better guides the user toward the videos they like.
Further, in other embodiments, the above step S20 may comprise:
generating a histogram for each key frame;
calculating the image information difference value between the histograms of adjacent key frames;
obtaining the accumulated image information difference value between the histograms of adjacent key frames within each preset unit time period;
taking the unit time periods whose accumulated image information difference value exceeds a preset first difference threshold as highlight time periods, extracting the video segments corresponding to those periods from the video file, and combining the extracted video segments into the highlight video segment.
In the present embodiment, after the video file has been decoded to obtain its key frame data, a histogram is generated for each key frame and the image information difference value between the histograms of adjacent key frames is calculated. This difference value is determined by the distribution and area of the histograms of the adjacent key frames. The key frame data of the video file is divided into unit time periods; for example, the length of a unit time period may be set to 10 s, so that each period contains all the key frames within those 10 s. The accumulated image information difference value between the histograms of adjacent key frames is then obtained for each period. For example, with a 10 s period and one key frame per second, the period contains histogram 1, histogram 2, ..., histogram 10 in order, and the accumulated image information difference value of the period is (histogram 2 - histogram 1) + (histogram 3 - histogram 2) + ... + (histogram 10 - histogram 9).
Because the content of the highlight portion of a video file usually rises and falls sharply and successive images differ widely, the larger the accumulated image information difference value of a unit time period, the more likely the corresponding video segment is a highlight of the video file. In the present embodiment a first difference threshold is set in advance; if the accumulated image information difference value of a unit time period exceeds the preset first difference threshold, the video segment corresponding to that period is identified as a highlight of the video file. The unit time periods whose accumulated image information difference value exceeds the preset first difference threshold are taken as highlight time periods, the video segments corresponding to those periods are extracted from the video file, and the extracted segments are encoded and combined into one highlight video segment.
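The sketch below illustrates this histogram rule under stated assumptions: NumPy, an L1 distance between normalised 256-bin grayscale histograms as the image information difference value, a 10 s unit time period and an arbitrary first difference threshold are all choices made for the example, not requirements of the embodiment.

```python
import numpy as np

def normalised_histogram(gray):
    """256-bin grayscale histogram, normalised so histograms of different frames are comparable."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 255))
    return hist / max(hist.sum(), 1)

def highlight_periods_by_histogram(key_frames, period_s=10.0, first_threshold=0.8):
    """key_frames: list of (timestamp_seconds, grayscale ndarray), in playback order."""
    accumulated = {}
    prev_hist = None
    for ts, gray in key_frames:
        hist = normalised_histogram(gray)
        if prev_hist is not None:
            diff = float(np.abs(hist - prev_hist).sum())   # image information difference value
            idx = int(ts // period_s)
            accumulated[idx] = accumulated.get(idx, 0.0) + diff
        prev_hist = hist
    # Keep the periods whose accumulated difference exceeds the first difference threshold.
    return sorted(idx for idx, total in accumulated.items() if total > first_threshold)
```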
Further, in other embodiments, the above step S10 may comprise:
when a video file is read, decoding the video file to obtain the audio data of the video file;
and the above step S20 may comprise:
generating a waveform diagram for each second of audio data;
calculating the audio information difference value between the waveform diagrams of adjacent audio data;
obtaining the accumulated audio information difference value between the waveform diagrams of adjacent audio data within each preset unit time period;
taking the unit time periods whose accumulated image information difference value and/or accumulated audio information difference value exceeds a preset second difference threshold as highlight time periods, extracting the video segments corresponding to those periods from the video file, and combining the extracted video segments into the highlight video segment.
In the present embodiment, if the video file contains audio, the audio data of the video file is obtained at the same time as the key frame data when the video file is decoded. A waveform diagram can then be generated for each second of audio data, and the audio information difference value between the waveform diagrams of adjacent audio data is calculated; this difference value can be computed as the difference between the waveform diagrams of adjacent audio data. The audio data of the video file is divided into unit time periods; for example, the length of a unit time period may be set to 10 s, so that each period contains all the audio data within those 10 s. The accumulated audio information difference value between the waveform diagrams of adjacent audio data is then obtained for each period. For example, with a 10 s period, the period contains waveform diagram 1, waveform diagram 2, ..., waveform diagram 10 in order, and the accumulated audio information difference value of the period is (waveform diagram 2 - waveform diagram 1) + (waveform diagram 3 - waveform diagram 2) + ... + (waveform diagram 10 - waveform diagram 9).
Because the content of the highlight portion of a video file usually rises and falls sharply and the audio waveform fluctuates accordingly, the larger the accumulated audio information difference value of a unit time period, the more likely the corresponding video segment is a highlight of the video file. In the present embodiment, when both the accumulated image information difference value and the accumulated audio information difference value are taken into account, a second difference threshold is set in advance; if the accumulated image information difference value or the accumulated audio information difference value of a unit time period exceeds the preset second difference threshold, the video segment corresponding to that period is identified as a highlight of the video file. The unit time periods whose accumulated image information difference value or accumulated audio information difference value exceeds the preset second difference threshold are taken as highlight time periods, the video segments corresponding to those periods are extracted from the video file, and the extracted segments are encoded and combined into one highlight video segment.
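The following sketch illustrates the audio rule under stated assumptions: each second of mono PCM audio is summarised by its RMS energy as a stand-in for the waveform diagram, differences between adjacent seconds are accumulated per 10 s period, and periods above an arbitrary second difference threshold are kept. NumPy, the function names and all numeric defaults are assumptions made for the example.

```python
import numpy as np

def per_second_rms(samples, sample_rate):
    """Summarise each second of mono PCM audio by its RMS energy."""
    summaries = []
    for start in range(0, len(samples) - sample_rate + 1, sample_rate):
        block = samples[start:start + sample_rate].astype(np.float64)
        summaries.append(float(np.sqrt(np.mean(block ** 2))))
    return summaries

def highlight_periods_by_audio(samples, sample_rate, period_s=10, second_threshold=0.5):
    rms = per_second_rms(samples, sample_rate)
    accumulated = {}
    for sec in range(1, len(rms)):
        diff = abs(rms[sec] - rms[sec - 1])        # audio information difference value
        idx = sec // period_s
        accumulated[idx] = accumulated.get(idx, 0.0) + diff
    # Keep the periods whose accumulated audio difference exceeds the second difference threshold.
    return sorted(idx for idx, total in accumulated.items() if total > second_threshold)
```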
Because the present embodiment considers both of the factors that mark a highlight in a video file, namely a large difference between successive images and a large fluctuation in the audio, the highlight in the video file can be extracted more accurately, so that a more accurate highlight of the video file is provided for viewing and the user is better guided toward the videos they like.
As shown in Fig. 6, Fig. 6 is a schematic diagram of obtaining the highlight time periods of the video file in the first embodiment of the method of displaying a video file of the present invention.
In the present embodiment, the key frame data and the audio data of the video file are each divided into 10 equal time periods, labelled 1, 2, ..., 10. Above each time period of the key frame data and of the audio data, the corresponding accumulated image information difference value and accumulated audio information difference value are marked; for example, in Fig. 6 the accumulated image information difference value of time period 1 of the key frame data is 50, and the accumulated audio information difference value of time period 1 of the audio data is 30. The second difference threshold is set in advance to 50; if the accumulated image information difference value or the accumulated audio information difference value of a unit time period reaches this threshold, the video segment corresponding to that period is identified as a highlight of the video file. In the present embodiment the qualifying periods are time period 1, time period 4, time period 5, time period 8 and time period 10, so these five periods are taken as the highlight time periods, the five video segments corresponding to them are extracted from the video file, and the five extracted segments are encoded and combined into one highlight video segment.
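A minimal sketch of the selection step in Fig. 6 follows. It treats a period as a highlight period when either accumulated value reaches the threshold of 50; the use of ">=" is an assumption made so that time period 1, whose image value in the figure is exactly 50, qualifies, whereas the claims recite "greater than". The value lists are to be supplied by the caller; no figures beyond those quoted in the text are assumed.

```python
def select_highlight_periods(image_diffs, audio_diffs, threshold=50):
    """image_diffs / audio_diffs: accumulated values for periods 1..N, in order."""
    return [i + 1 for i, (img, aud) in enumerate(zip(image_diffs, audio_diffs))
            if img >= threshold or aud >= threshold]
```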
As shown in Fig. 7, the second embodiment of the present invention proposes a method of displaying a video file which, on the basis of the above embodiment, further comprises, after the above step S30:
Step S40: when a click command on the playing highlight video segment is received, playing the video file.
In the present embodiment, while the playing highlight video segment is being used to display the video file, if a click command on the playing highlight video segment is received, it is judged that the user wants to watch the full content of the video file, and the video file is played. The user can thus decide from the playing highlight segment whether to watch the video file while it is being displayed, and when the user does want to watch it, the full content of the video file is played.
It should be noted that, as used herein, the terms "comprise", "comprising" and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements not only comprises those elements but also comprises other elements not expressly listed, or further comprises elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the statement "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device comprising that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. This computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk or optical disc) and comprises instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structural or equivalent process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect use in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (10)

1. A device for displaying a video file, characterised in that the device for displaying a video file comprises:
an acquisition module, configured to decode a video file when the video file is read, to obtain the key frame data of the video file;
an extraction module, configured to extract, based on the key frame data, the highlight video segment in the video file according to a preset rule;
a display module, configured to display the playing highlight video segment when a display instruction for the video file is received.
2. The device for displaying a video file according to claim 1, characterised in that the extraction module is further configured to:
generate a histogram for each key frame; calculate the image information difference value between the histograms of adjacent key frames; obtain the accumulated image information difference value between the histograms of adjacent key frames within each preset unit time period; take the unit time periods whose accumulated image information difference value exceeds a preset first difference threshold as highlight time periods, extract the video segments corresponding to those periods from the video file, and combine the extracted video segments into the highlight video segment.
3. The device for displaying a video file according to claim 2, characterised in that the acquisition module is further configured to:
decode the video file when the video file is read, to obtain the audio data of the video file;
and in that the extraction module is further configured to:
generate a waveform diagram for each second of audio data; calculate the audio information difference value between the waveform diagrams of adjacent audio data; obtain the accumulated audio information difference value between the waveform diagrams of adjacent audio data within each preset unit time period; take the unit time periods whose accumulated image information difference value and/or accumulated audio information difference value exceeds a preset second difference threshold as highlight time periods, extract the video segments corresponding to those periods from the video file, and combine the extracted video segments into the highlight video segment.
4. The device for displaying a video file according to claim 1, characterised in that the display module is further configured to:
establish a mapping relation between the highlight video segment and the video file; and, when a display instruction for the video file is received, retrieve the highlight video segment according to the mapping relation and play the highlight video segment so as to display the video file.
5. The device for displaying a video file according to claim 1, characterised by further comprising:
a playing module, configured to play the video file when a click command on the playing highlight video segment is received.
6. A method of displaying a video file, characterised in that the method of displaying a video file comprises the following steps:
when a video file is read, decoding the video file to obtain the key frame data of the video file;
based on the key frame data, extracting the highlight video segment in the video file according to a preset rule;
when a display instruction for the video file is received, displaying the playing highlight video segment.
7. The method of displaying a video file according to claim 6, characterised in that the step of extracting, based on the key frame data, the highlight video segment in the video file according to a preset rule comprises:
generating a histogram for each key frame;
calculating the image information difference value between the histograms of adjacent key frames;
obtaining the accumulated image information difference value between the histograms of adjacent key frames within each preset unit time period;
taking the unit time periods whose accumulated image information difference value exceeds a preset first difference threshold as highlight time periods, extracting the video segments corresponding to those periods from the video file, and combining the extracted video segments into the highlight video segment.
8. The method of displaying a video file according to claim 7, characterised in that the step of decoding the video file when the video file is read to obtain the key frame data of the video file further comprises:
when the video file is read, decoding the video file to obtain the audio data of the video file;
and in that the step of extracting, based on the key frame data, the highlight video segment in the video file according to a preset rule further comprises:
generating a waveform diagram for each second of audio data;
calculating the audio information difference value between the waveform diagrams of adjacent audio data;
obtaining the accumulated audio information difference value between the waveform diagrams of adjacent audio data within each preset unit time period;
taking the unit time periods whose accumulated image information difference value and/or accumulated audio information difference value exceeds a preset second difference threshold as highlight time periods, extracting the video segments corresponding to those periods from the video file, and combining the extracted video segments into the highlight video segment.
9. The method of displaying a video file according to claim 6, characterised in that the step of displaying the playing highlight video segment when a display instruction for the video file is received comprises:
establishing a mapping relation between the highlight video segment and the video file;
when a display instruction for the video file is received, retrieving the highlight video segment according to the mapping relation, and playing the highlight video segment so as to display the video file.
10. The method of displaying a video file according to claim 6, characterised in that, after the step of displaying the playing highlight video segment when a display instruction for the video file is received, the method further comprises:
when a click command on the playing highlight video segment is received, playing the video file.
CN201610004162.1A 2016-01-04 2016-01-04 Device and method for displaying video file Pending CN105681894A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610004162.1A CN105681894A (en) 2016-01-04 2016-01-04 Device and method for displaying video file
PCT/CN2016/113751 WO2017118353A1 (en) 2016-01-04 2016-12-30 Device and method for displaying video file

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610004162.1A CN105681894A (en) 2016-01-04 2016-01-04 Device and method for displaying video file

Publications (1)

Publication Number Publication Date
CN105681894A true CN105681894A (en) 2016-06-15

Family

ID=56298917

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610004162.1A Pending CN105681894A (en) 2016-01-04 2016-01-04 Device and method for displaying video file

Country Status (2)

Country Link
CN (1) CN105681894A (en)
WO (1) WO2017118353A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112347941B (en) * 2020-11-09 2021-06-08 南京紫金体育产业股份有限公司 Motion video collection intelligent generation and distribution method based on 5G MEC
CN115052188A (en) * 2022-05-09 2022-09-13 北京有竹居网络技术有限公司 Video editing method, device, equipment and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012138804A (en) * 2010-12-27 2012-07-19 Sony Corp Image processor, image processing method, and program
CN105681894A (en) * 2016-01-04 2016-06-15 努比亚技术有限公司 Device and method for displaying video file

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090083812A1 (en) * 2007-01-19 2009-03-26 Beijing Funshion Online Technologies Ltd. Method and apparatus for controlling on-demand play of media files based on P2P protocols
CN101599179A (en) * 2009-07-17 2009-12-09 北京邮电大学 Method for automatically generating field motion wonderful scene highlights
CN102196001A (en) * 2010-03-15 2011-09-21 腾讯科技(深圳)有限公司 Movie file downloading device and method
CN102316370A (en) * 2010-06-29 2012-01-11 腾讯科技(深圳)有限公司 Method and device for displaying playback information
CN104951742A (en) * 2015-03-02 2015-09-30 北京奇艺世纪科技有限公司 Detection method and system for sensitive video
CN104780417A (en) * 2015-03-20 2015-07-15 广东欧珀移动通信有限公司 Display method for preview video file, mobile terminal and system

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017118353A1 (en) * 2016-01-04 2017-07-13 努比亚技术有限公司 Device and method for displaying video file
CN106792272A (en) * 2016-11-28 2017-05-31 维沃移动通信有限公司 The generation method and mobile terminal of a kind of video thumbnails
CN110798735A (en) * 2019-08-28 2020-02-14 腾讯科技(深圳)有限公司 Video processing method and device and electronic equipment
CN110798735B (en) * 2019-08-28 2022-11-18 腾讯科技(深圳)有限公司 Video processing method and device and electronic equipment
CN111246244A (en) * 2020-02-04 2020-06-05 北京贝思科技术有限公司 Method and device for rapidly analyzing and processing audio and video in cluster and electronic equipment
CN113382287A (en) * 2020-03-09 2021-09-10 阿里巴巴集团控股有限公司 Media file playing method, device and system
CN113225596A (en) * 2021-04-28 2021-08-06 百度在线网络技术(北京)有限公司 Video processing method and device, electronic equipment and storage medium
CN113225596B (en) * 2021-04-28 2022-11-01 百度在线网络技术(北京)有限公司 Video processing method and device, electronic equipment and storage medium
CN113691864A (en) * 2021-07-13 2021-11-23 北京百度网讯科技有限公司 Video clipping method, video clipping device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
WO2017118353A1 (en) 2017-07-13

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160615