CN105100892A - Video playing device and method - Google Patents

Video playing device and method

Info

Publication number
CN105100892A
CN105100892A (application CN201510452582.1A)
Authority
CN
China
Prior art keywords
video
character
played
label
character recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510452582.1A
Other languages
Chinese (zh)
Other versions
CN105100892B (en)
Inventor
陈理锐
沈闯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201510452582.1A priority Critical patent/CN105100892B/en
Publication of CN105100892A publication Critical patent/CN105100892A/en
Application granted granted Critical
Publication of CN105100892B publication Critical patent/CN105100892B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/441 Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/439 Processing of audio elementary streams
    • H04N 21/4394 Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments

Abstract

The invention discloses a video playing device comprising: a receiving module for determining, when a play instruction is received, the video to be played and the character to be played; an extraction module for performing feature recognition on the video and extracting the video segments containing the character identifier corresponding to the character to be played; and a playing module for recombining and playing the extracted segments. The invention also provides a video playing method. During playback, the device and method automatically screen out the relevant video segments for playing, saving the user's time and improving the user experience.

Description

Video playing device and method
Technical field
The present invention relates to the field of video playback, and in particular to a video playing device and method.
Background art
A film or television drama has its leading roles; the plot unfolds around them and is carried by their performances, so the scenes in which the leading roles appear are usually the most compelling parts of the video. At the same time, users in daily life often need to browse the highlights of a video within a short time, and traditional playing methods mostly rely on mechanical fast-forwarding, on text synopses, or on trailers produced in advance. Mechanical fast-forwarding, however, often skips many of the highlights, and a pre-made synopsis or trailer does not let the user selectively view the content he or she actually wants to see. Existing video playing methods therefore cannot automatically screen out selected highlight segments of a video for the user to enjoy selectively, so as to save the user's time and improve the user experience.
The foregoing is provided only to aid understanding of the technical solution of the present invention and is not an admission that it constitutes prior art.
Summary of the invention
The main purpose of the present invention is to solve the problem that existing video playing methods cannot automatically screen out selected highlight segments of a video for the user to enjoy selectively, so as to save the user's time and improve the user experience.
To achieve the above object, the present invention provides a video playing device comprising:
a receiving module for determining, when a play instruction is received, the video to be played and the character to be played;
an extraction module for performing feature recognition on the video and extracting the video segments containing the character identifier corresponding to the character to be played;
a playing module for recombining and playing the extracted video segments.
Preferably, the extraction module comprises an image extraction unit, a face recognition unit and a combination unit. The image extraction unit extracts the image frames of the video; the face recognition unit performs face recognition on the image frames and obtains the frames containing the face image corresponding to the character to be played;
the combination unit combines the obtained frames into video segments.
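The patent does not specify how the combination unit turns matched frames into segments. As a minimal illustrative sketch (all names and the gap threshold are assumptions, not the patented implementation), its job can be modeled as merging runs of frame indices in which the target face was recognized, with a small tolerance gap so that brief occlusions or missed detections do not split a clip:

```python
def frames_to_segments(matched_frames, max_gap=12):
    """Merge frame indices where the target face was found into
    (start, end) segments, tolerating gaps of up to max_gap frames
    (e.g. brief occlusions or missed detections)."""
    segments = []
    for f in sorted(matched_frames):
        if segments and f - segments[-1][1] <= max_gap:
            segments[-1][1] = f          # extend the current segment
        else:
            segments.append([f, f])      # start a new segment
    return [tuple(s) for s in segments]

# Frames 10-14 and 40-41 contain the chosen character; frame 16 is a
# near-miss that the gap tolerance absorbs into the first segment.
print(frames_to_segments([10, 11, 12, 13, 14, 16, 40, 41]))
# → [(10, 16), (40, 41)]
```

In a real device the matched indices would come from a face recognizer run over decoded frames; here they are supplied directly so the merging logic stands alone.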
Preferably, the extraction module further comprises an audio extraction unit and a voiceprint recognition unit.
The audio extraction unit extracts the audio track of the video;
the voiceprint recognition unit performs voiceprint recognition on the audio track and extracts the audio fragments containing the voiceprint feature corresponding to the character to be played;
the combination unit also extracts from the video the video segments corresponding to those audio fragments.
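Mapping voiceprint-matched audio fragments back onto the video can be sketched as a simple time-to-frame conversion followed by a merge of overlapping ranges. This is an illustration under assumed names and a fixed frame rate, not the patented implementation:

```python
def audio_to_video_segments(audio_fragments, fps=25.0):
    """Map audio fragments (start_s, end_s) in which the target
    voiceprint was detected onto frame ranges of the video, merging
    fragments that overlap or touch after conversion."""
    ranges = sorted((int(s * fps), int(e * fps)) for s, e in audio_fragments)
    merged = []
    for start, end in ranges:
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # overlap: extend
        else:
            merged.append([start, end])
    return [tuple(m) for m in merged]

# Two voiceprint hits at 1.0-2.5 s and 2.4-4.0 s overlap once mapped
# to frames, so they yield a single video segment.
print(audio_to_video_segments([(1.0, 2.5), (2.4, 4.0)]))
# → [(25, 100)]
```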
Preferably, the receiving module comprises a receiving unit, a determining unit and a selection unit.
The receiving unit determines, when a play instruction is received, the video to be played according to the instruction;
the determining unit determines the characters appearing in the video and their corresponding character identifiers;
the selection unit displays a selection interface listing the characters appearing in the video, through which the user selects the character to be played;
the receiving unit, upon receiving the selection-complete instruction triggered by the user through the selection interface, determines the character to be played and its corresponding character identifier.
Preferably, the receiving module further comprises a matching unit.
The determining unit also performs feature recognition on the video to determine the character identifiers appearing in it;
the matching unit matches the determined character identifiers against a stored character database to determine the character corresponding to each identifier.
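The matching step described above can be sketched as nearest-neighbor lookup of an extracted feature vector against reference vectors in the character database. The cosine-similarity measure, the threshold, and all names below are assumptions for illustration; the patent does not prescribe a particular matching technique:

```python
import math

def match_identifier(feature, database, threshold=0.8):
    """Match one extracted identifier (a feature vector) against a
    database of {character_name: reference_vector}, returning the
    best-matching character, or None if nothing clears the threshold."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    best, best_score = None, threshold
    for name, ref in database.items():
        score = cosine(feature, ref)
        if score >= best_score:
            best, best_score = name, score
    return best

# Hypothetical 3-dimensional reference vectors for two known characters.
db = {"lead": [1.0, 0.0, 0.2], "support": [0.0, 1.0, 0.1]}
print(match_identifier([0.9, 0.1, 0.2], db))
# → lead
```

In practice the vectors would be face embeddings or voiceprint features of much higher dimension; the lookup logic is the same.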
In addition, to achieve the above object, the present invention also provides a video playing method comprising the following steps:
when a play instruction is received, determining the video to be played and the character to be played;
performing feature recognition on the video, and extracting the video segments containing the character identifier corresponding to the character to be played;
recombining and playing the extracted video segments.
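The three-step method above (determine the character, extract matching segments, recombine for playback) can be condensed into one toy end-to-end sketch. The per-frame label sets stand in for the output of feature recognition; every name here is hypothetical:

```python
def play_character(frame_labels, target, fps=25.0):
    """Given per-frame sets of recognized character labels, extract
    the frames containing `target` and recombine consecutive matches
    into a playlist of (start_s, end_s) clips."""
    clips = []
    for i, labels in enumerate(frame_labels):
        if target in labels:
            if clips and i == clips[-1][1] + 1:
                clips[-1][1] = i         # extend current clip
            else:
                clips.append([i, i])     # open a new clip
    return [(s / fps, (e + 1) / fps) for s, e in clips]

# Recognition results for a 6-frame toy video, one label set per frame.
labels = [{"A"}, {"A", "B"}, {"B"}, {"B"}, {"A"}, {"A"}]
print(play_character(labels, "A", fps=1.0))
# → [(0.0, 2.0), (4.0, 6.0)]
```

A playing module would then seek to each (start, end) pair in turn, which is the "recombining and playing" step of the claim.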
Preferably, the character identifier is a face image, and the step of performing feature recognition on the video and extracting the video segments containing the character identifier corresponding to the character to be played comprises:
extracting the image frames of the video;
performing face recognition on the image frames to obtain the frames containing the face image corresponding to the character to be played;
combining the obtained frames into video segments.
Preferably, the character identifier is a voiceprint feature, and the step of performing feature recognition on the video and extracting the video segments containing the character identifier corresponding to the character to be played comprises:
extracting the audio track of the video;
performing voiceprint recognition on the audio track to extract the audio fragments containing the voiceprint feature corresponding to the character to be played;
extracting from the video the video segments corresponding to those audio fragments.
Preferably, the step of determining, when a play instruction is received, the video to be played and the character to be played comprises:
when a play instruction is received, determining the video to be played according to the instruction;
determining the characters appearing in the video and their corresponding character identifiers;
displaying a selection interface listing the characters appearing in the video, through which the user selects the character to be played;
when the selection-complete instruction triggered by the user through the selection interface is received, determining the character to be played and its corresponding character identifier.
Preferably, the step of determining the characters appearing in the video and their corresponding character identifiers comprises:
performing feature recognition on the video to determine the character identifiers appearing in it;
matching the determined character identifiers against a stored character database to determine the character corresponding to each identifier.
According to a user instruction, the present invention performs feature recognition on the video and then selectively plays the video segments related to the character the user specifies. The relevant segments are thus screened out automatically according to the user's selection and played, saving the user's time and improving the user experience.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware configuration of a mobile terminal implementing embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a functional block diagram of the first embodiment of the video playing device of the present invention;
Fig. 4 is a functional block diagram of the second embodiment of the video playing device of the present invention;
Fig. 5 is a functional block diagram of the third embodiment of the video playing device of the present invention;
Fig. 6 is a functional block diagram of the fourth embodiment of the video playing device of the present invention;
Fig. 7 is a flow chart of the first embodiment of the video playing method of the present invention;
Fig. 8 is a flow chart of the second embodiment of the video playing method of the present invention;
Fig. 9 is a flow chart of the third embodiment of the video playing method of the present invention;
Fig. 10 is a flow chart of the fourth embodiment of the video playing method of the present invention;
Fig. 11 is a flow chart of a preferred embodiment of the step of determining the characters appearing in the video and their corresponding character identifiers.
The realization of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are only intended to explain the present invention, not to limit it.
A mobile terminal implementing embodiments of the present invention is now described with reference to the drawings. In the following description, suffixes such as "module", "part" or "unit" are used for elements only to facilitate the description of the invention and carry no specific meaning by themselves; "module" and "part" may therefore be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal; however, those skilled in the art will appreciate that, apart from elements used specifically for mobile purposes, the construction according to the embodiments of the present invention can also be applied to fixed-type terminals.
Fig. 1 is a schematic diagram of the hardware configuration of a mobile terminal implementing embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and so on. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114 and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and transmits them to the terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like, and may further include a broadcast signal combined with a TV or radio broadcast signal. Broadcast-related information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist in the form of an electronic program guide (EPG) of digital multimedia broadcasting (DMB) or an electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H). The broadcast receiving module 111 may receive signal broadcasts using various types of broadcast systems; in particular, it may receive digital broadcasts using digital broadcasting systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the forward link media (MediaFLO) data broadcasting system, and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be constructed to be suitable for the above digital broadcasting systems as well as for various other broadcast systems that provide broadcast signals. Broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point or Node B), an external terminal and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access for the mobile terminal and may be internally or externally coupled to the terminal. The wireless Internet access technologies involved may include WLAN (wireless LAN, Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and the like.
The short-range communication module 114 is a module for supporting short-range communication. Examples of short-range communication technologies include Bluetooth™, radio-frequency identification (RFID), the Infrared Data Association (IrDA) standard, ultra-wideband (UWB), ZigBee™, and the like.
The location information module 115 is a module for checking or acquiring the location information of the mobile terminal; a typical example is the GPS (global positioning system) module. According to current technology, the GPS module 115 calculates distance information from three or more satellites together with accurate time information and applies triangulation to the calculated information, thereby computing accurate three-dimensional position information in terms of longitude, latitude and altitude. Currently, the method for calculating position and time information uses three satellites and corrects the error of the calculated position and time information using a further satellite. In addition, the GPS module 115 can compute speed information by continuously calculating the current position in real time.
The A/V input unit 120 is used to receive audio or video signals and may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode, and the processed image frames may be displayed on the display unit 151. Image frames processed by the camera 121 may be stored in the memory 160 (or another storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided depending on the construction of the mobile terminal. The microphone 122 receives sound (audio data) in operating modes such as a phone call mode, a recording mode or a voice recognition mode and processes it into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the mobile communication module 112. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The user input unit 130 may generate key input data according to commands input by the user to control various operations of the mobile terminal. It allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by being touched), a jog wheel, a jog switch, and the like. In particular, when the touch pad is superimposed on the display unit 151 as a layer, a touch screen may be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., its open or closed state), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, acceleration or deceleration movement and direction of the mobile terminal 100, and so on, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type phone, the sensing unit 140 may sense whether the phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled to an external device. The sensing unit 140 may include a proximity sensor 141, which is described below in connection with the touch screen.
The interface unit 170 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The identification module may store various information for authenticating a user of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and the like. In addition, the device having the identification module (hereinafter referred to as the "identification device") may take the form of a smart card, so the identification device can be connected to the mobile terminal 100 via a port or other connector. The interface unit 170 may be used to receive input (e.g., data, information, power, etc.) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or to transfer data between the mobile terminal and an external device.
In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. Various command signals or power input from the cradle may serve as signals for recognizing whether the mobile terminal is correctly mounted on the cradle. The output unit 150 is constructed to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audible and/or tactile manner and may include a display unit 151, an audio output module 152, an alarm unit 153, and so on.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (e.g., text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 may display captured and/or received images, or a UI or GUI showing video or images and related functions, and so on.
Meanwhile, when the display unit 151 and the touch pad are superimposed on one another as layers to form a touch screen, the display unit 151 may serve both as an input device and as an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film-transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and so on. Some of these displays may be constructed to be transparent to allow the user to view through them from the outside; these may be called transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 152 may, when the mobile terminal is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode or the like, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and so on.
The alarm unit 153 may provide output to notify of the occurrence of an event of the mobile terminal 100. Typical events may include call reception, message reception, key signal input, touch input, and so on. In addition to audio or video output, the alarm unit 153 may provide output in different manners to notify of the occurrence of an event. For example, the alarm unit 153 may provide output in the form of vibration: when a call, a message or some other incoming communication is received, the alarm unit 153 may provide a tactile output (i.e., vibration) to notify the user. By providing such a tactile output, the user can recognize the occurrence of various events even when the user's phone is in the user's pocket. The alarm unit 153 may also provide output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 may store software programs for the processing and control operations performed by the controller 180, or temporarily store data that has been or is to be output (e.g., a phone book, messages, still images, video, etc.). Moreover, the memory 160 may store data on the various modes of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disc, and so on. Moreover, the mobile terminal 100 may cooperate, over a network connection, with a network storage device that performs the storage function of the memory 160.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls, and so on. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be constructed within the controller 180 or separate from it. The controller 180 may perform pattern recognition processing to recognize handwriting input or drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external or internal power under the control of the controller 180 and provides the appropriate power required to operate each element and component.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For hardware implementation, the embodiments described herein may be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, and an electronic unit designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180. For software implementation, embodiments such as procedures or functions may be implemented with separate software modules that allow at least one function or operation to be performed. Software code may be implemented by a software application (or program) written in any suitable programming language; the software code may be stored in the memory 160 and executed by the controller 180.
So far, oneself is through the mobile terminal according to its functional description.Below, for the sake of brevity, by the slide type mobile terminal that describes in various types of mobile terminals of such as folded form, board-type, oscillating-type, slide type mobile terminal etc. exemplarily.Therefore, the present invention can be applied to the mobile terminal of any type, and is not limited to slide type mobile terminal.
The mobile terminal 100 as shown in Fig. 1 may be configured to operate with communication systems, such as wired and wireless communication systems and satellite-based communication systems, that transmit data via frames or packets.
A communication system within which the mobile terminal according to the present invention is operable will now be described with reference to Fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, air interfaces utilized by the communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), global system for mobile communications (GSM), and the like. As a non-limiting example, the description below relates to a CDMA communication system, but such teachings apply equally to other types of systems.
Referring to Fig. 2, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BSs) 270, base station controllers (BSCs) 275, and a mobile switching center (MSC) 280. The MSC 280 is configured to interface with a public switched telephone network (PSTN) 290. The MSC 280 is also configured to interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul lines. The backhaul lines may be configured according to any of several known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It is to be understood that the system as shown in Fig. 2 may include a plurality of BSCs 275.
Each BS 270 may serve one or more sectors (or regions), each sector covered by an omni-directional antenna or an antenna pointed in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be configured to support a plurality of frequency assignments, with each frequency assignment having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BS 270 may also be referred to as a base transceiver subsystem (BTS) or another equivalent term. In such a case, the term "base station" may be used to refer broadly to a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site". Alternatively, individual sectors of a particular BS 270 may be referred to as a plurality of cell sites.
As shown in Fig. 2, a broadcasting transmitter (BT) 295 transmits a broadcast signal to the mobile terminals 100 operating within the system. The broadcast receiving module 111 as shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signal transmitted by the BT 295. In Fig. 2, several global positioning system (GPS) satellites 300 are shown. The satellites 300 assist in locating at least one of the plurality of mobile terminals 100.
In Fig. 2, a plurality of satellites 300 are depicted, but it is to be understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 as shown in Fig. 1 is typically configured to cooperate with the satellites 300 to obtain desired positioning information. Instead of or in addition to GPS tracking techniques, other techniques capable of tracking the location of the mobile terminal may be used. In addition, at least one of the GPS satellites 300 may alternatively or additionally handle satellite DMB transmission.
As one typical operation of the wireless communication system, the BSs 270 receive reverse-link signals from various mobile terminals 100. The mobile terminals 100 typically engage in calls, messaging, and other types of communication. Each reverse-link signal received by a particular BS 270 is processed within that BS 270, and the resulting data is forwarded to an associated BSC 275. The BSC 275 provides call resource allocation and mobility management functionality including the coordination of soft handoff procedures between BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC interfaces with the BSCs 275, and the BSCs 275 in turn control the BSs 270 to transmit forward-link signals to the mobile terminals 100.
Based on the above-described mobile terminal hardware structure and communication system, various embodiments of the video playing device of the present invention are proposed.
Referring to Fig. 3, Fig. 3 is a functional block diagram of a first embodiment of the video playing device of the present invention.
In this embodiment, the video playing device comprises: a receiving module 10, an extraction module 20, and a playing module 30.
The receiving module 10 is configured to determine, when a play instruction is received, a video to be played and a person to be played.
When a video play instruction from the user is received, the video corresponding to the play instruction and the corresponding video person to be played are determined, so that the video segments related to the corresponding video person can be played according to the user's play instruction.
The play instruction may be triggered by the user through a video playing interface that displays a video play list; alternatively, it may be triggered by the user through a provided physical button for playing video; or it may be triggered by the user through a provided shortcut icon for playing video.
The person to be played is a video person who appears in the video to be played. A selection interface may be provided to display the video persons appearing in the video, so that the user can select the person to be played based on the selection interface; when a selection-complete instruction triggered by the user through the selection interface is received, the person to be played is determined. Preferably, the selection interface may display the facial images corresponding to the video persons appearing in the video, so that the user can select the person to be played based on the displayed facial images; when a selection-complete instruction triggered by the user through the selection interface is received, the person to be played is determined.
The person to be played may be one or more of the video persons appearing in the video to be played. That is, according to the user's play instruction, the video segments related to a single video person appearing in the video to be played may be played, or the video segments related to a plurality of video persons appearing in the video to be played may be played together.
The extraction module 20 is configured to perform feature recognition on the video and extract the video segments containing the character recognition label corresponding to the person to be played.
Character feature recognition is performed on the video, and the video segments containing the character recognition label corresponding to the person to be played are extracted from the video, so as to obtain the video segments related to the person selected by the user. The character recognition label may be the facial feature of the video person or the voiceprint feature of the video person.
The feature recognition may be performed on the video in any of the following ways. Face recognition may be performed on the image frames of the video, and the video segments containing the person to be played may be identified according to the facial image features of that person. Alternatively, voiceprint recognition may be performed on the audio file of the video: the audio segments containing the voiceprint feature corresponding to the person to be played are identified according to that person's voiceprint feature, and the video segments containing the person are extracted according to the correspondence between the audio file and the video. Alternatively, both may be combined: face recognition is first performed on the image frames to identify first video segments containing the person to be played; voiceprint recognition is then performed on the audio file to identify the audio segments containing the person's voiceprint feature and, via the correspondence between the audio file and the video, to extract second video segments containing the person; the video segments containing the person to be played are then collated from the first video segments and the second video segments.
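As an illustration of the first approach, the step of turning per-frame face-recognition results into playable segments can be sketched as follows. The helper `frames_to_segments` and the boolean `hits` list are hypothetical names introduced here for illustration only; the frame-level face recognizer that produces `hits` is assumed to exist elsewhere.

```python
# Minimal sketch: given per-frame face-recognition results, collect the
# contiguous runs of frames in which the target person appears and convert
# them into (start, end) video segments in seconds.

def frames_to_segments(hits, fps):
    """Group consecutive True frames into (start_sec, end_sec) segments."""
    segments = []
    start = None
    for i, present in enumerate(hits):
        if present and start is None:
            start = i                      # segment opens at this frame
        elif not present and start is not None:
            segments.append((start / fps, i / fps))
            start = None                   # segment closes before this frame
    if start is not None:                  # person still on screen at the end
        segments.append((start / fps, len(hits) / fps))
    return segments

# Example: person visible in frames 2-4 and 8-9 of a 25 fps video.
hits = [False, False, True, True, True, False, False, False, True, True]
print(frames_to_segments(hits, fps=25))   # [(0.08, 0.2), (0.32, 0.4)]
```

The same grouping applies unchanged to the voiceprint path, with the booleans derived from the audio segments instead of the image frames.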
The playing module 30 is configured to recombine and play the extracted video segments.
The extracted video segments are recombined in chronological order and played. Preferably, if first video segments containing the person to be played have been extracted by face recognition and second video segments have been extracted by voiceprint recognition, the portions duplicated between the first video segments and the second video segments are deleted, and the remaining video segments are output and played in chronological order.
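The recombination and de-duplication step can be sketched roughly as follows, assuming each recognition pass yields a list of (start, end) segments in seconds. The helper `merge_segments` is a hypothetical name for illustration, not part of the patent.

```python
# Minimal sketch: union the face-recognition segments and the
# voiceprint-recognition segments, collapsing overlapping or duplicate
# portions so no part of the video is played twice, ordered chronologically.

def merge_segments(first, second):
    """Union two segment lists, removing overlaps, sorted by start time."""
    merged = []
    for start, end in sorted(first + second):
        if merged and start <= merged[-1][1]:       # overlaps previous segment
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

face_segs  = [(0.0, 4.0), (10.0, 15.0)]
voice_segs = [(3.0, 6.0), (12.0, 13.0), (20.0, 22.0)]
print(merge_segments(face_segs, voice_segs))
# [(0.0, 6.0), (10.0, 15.0), (20.0, 22.0)]
```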
In this embodiment, feature recognition is performed on the video according to the user's play instruction, so that the video segments related to the video person specified by the user can be selectively played. Thus, when the video is played, the relevant video segments are automatically screened out according to the user's selection and played, which saves the user time and improves the user experience.
Referring to Fig. 4, Fig. 4 is a functional block diagram of a second embodiment of the video playing device of the present invention. Based on the first embodiment of the video playing device described above, the extraction module 20 comprises an image extraction unit 21, a face recognition unit 22, and a combination unit 23.
The image extraction unit 21 is configured to extract the image frames of the video from the video.
The face recognition unit 22 is configured to perform face recognition on the image frames and obtain the image frames containing the facial image corresponding to the person to be played.
The combination unit 23 is configured to combine the obtained image frames into video segments.
The image frames of the video are extracted from the video; face recognition is performed on the image frames to determine the image frames in the video that contain the facial image corresponding to the person to be played; and the image frames containing that facial image are combined into video segments.
Preferably, face recognition may be performed using one or more face recognition algorithms such as feature-based recognition algorithms (based on facial feature points), appearance-based recognition algorithms (based on the whole facial image), template-based recognition algorithms, and recognition algorithms using neural networks.
Preferably, during face recognition, the scenes in the video may be identified in order to improve the efficiency of face recognition within the same scene. For example, the difference between the overall histograms of two images spaced a number of frames apart may be obtained and compared with a preset difference threshold. When the difference between the overall histograms of the two images exceeds the preset difference threshold, it is determined that the video scene has changed, i.e., the two images do not belong to the same video scene; when the difference does not exceed the preset difference threshold, it is determined that the video scene has not changed, i.e., the two images belong to the same video scene.
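The histogram comparison described above can be sketched as follows. A real implementation would use a video library's histogram routines; this minimal version, with hypothetical helpers `histogram` and `same_scene`, works on flat lists of 8-bit grayscale pixel values, and the threshold value is an arbitrary illustration.

```python
# Minimal sketch of the scene-change test: compare the overall histograms of
# two frames taken some frames apart; if the total bin-by-bin difference
# exceeds a preset threshold, treat the frames as different scenes.

def histogram(frame, bins=16):
    """Bucket 256 gray levels into `bins` histogram bins."""
    h = [0] * bins
    for pixel in frame:
        h[pixel * bins // 256] += 1
    return h

def same_scene(frame_a, frame_b, threshold):
    """True if the overall histogram difference stays within the threshold."""
    diff = sum(abs(a - b)
               for a, b in zip(histogram(frame_a), histogram(frame_b)))
    return diff <= threshold

dark  = [10] * 100          # a mostly dark frame
dark2 = [12] * 100          # a slightly different dark frame, same scene
light = [200] * 100         # a bright frame after a cut
print(same_scene(dark, dark2, threshold=50))  # True
print(same_scene(dark, light, threshold=50))  # False
```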
In this embodiment, face recognition is performed on the image frames of the video to be played, and the video segments related to the person selected by the user are extracted, so that the video segments related to the video person specified by the user can be selectively played. Thus, when the video is played, the relevant video segments are automatically screened out according to the user's selection and played, which saves the user time and improves the user experience.
Referring to Fig. 5, Fig. 5 is a functional block diagram of a third embodiment of the video playing device of the present invention. Based on the second embodiment of the video playing device described above, the extraction module further comprises an audio extraction unit 24 and a voiceprint recognition unit 25.
The audio extraction unit 24 is configured to extract the audio file of the video from the video.
The voiceprint recognition unit 25 is configured to perform voiceprint recognition on the audio file and extract the audio segments containing the voiceprint feature corresponding to the person to be played.
The combination unit 23 is further configured to extract the video segments corresponding to the audio segments from the video.
The audio file of the video is extracted from the video; voiceprint recognition is performed on the audio file to extract the audio segments containing the voiceprint feature corresponding to the person to be played; and the video segments corresponding to those audio segments are extracted from the video.
Preferably, voiceprint recognition may be performed on the audio file as follows. The voice of each speaker to be recognized is regarded as one signal source and can be characterized by a codebook. A group of vectors is extracted from the voice to be recognized and vector quantization is performed on them in turn to obtain the speaker's feature vector sequence; the efficiency and precision of voiceprint recognition are thus independent of the speaker's language and of the text length. An independent vector model is established for the feature vector sequence of each speaker, so that the voice features of each speaker form their own specific feature clustering center in the feature space. The audio segments with similar voiceprint features are classified and marked, and the audio segments containing the voiceprint feature corresponding to the person to be played are extracted from them.
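The vector-quantization idea can be sketched very roughly as follows. Each speaker's codebook is reduced here to a single centroid of training feature vectors (a real system would keep many code words, e.g. trained by k-means); the helper names and the 2-D feature vectors are hypothetical illustrations only.

```python
# Minimal sketch: an unknown feature-vector sequence is assigned to the
# speaker whose codebook yields the smallest average quantization distortion.

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(component) / len(vectors) for component in zip(*vectors)]

def distortion(vec, code):
    """Squared Euclidean distance between a vector and a code word."""
    return sum((a - b) ** 2 for a, b in zip(vec, code))

def identify(sequence, codebooks):
    """Return the speaker whose codebook best quantizes the sequence."""
    def avg_distortion(spk):
        code = codebooks[spk]
        return sum(distortion(v, code) for v in sequence) / len(sequence)
    return min(codebooks, key=avg_distortion)

# Hypothetical 2-D feature vectors for two enrolled speakers.
codebooks = {
    "speaker_a": centroid([[1.0, 1.0], [1.2, 0.8]]),
    "speaker_b": centroid([[5.0, 5.0], [4.8, 5.2]]),
}
print(identify([[1.1, 0.9], [0.9, 1.1]], codebooks))  # speaker_a
```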
In this embodiment, voiceprint recognition is performed on the audio file of the video to be played, and the video segments related to the person selected by the user are extracted, so that the video segments related to the video person specified by the user can be selectively played. Thus, when the video is played, the relevant video segments are automatically screened out according to the user's selection and played, which saves the user time and improves the user experience.
Referring to Fig. 6, Fig. 6 is a functional block diagram of a fourth embodiment of the video playing device of the present invention. Based on the first embodiment of the video playing device described above, the receiving module 10 comprises a receiving unit 11, a determining unit 12, a selection unit 13, and a matching unit 14.
The receiving unit 11 is configured to determine, when a play instruction is received, the video to be played according to the play instruction.
The determining unit 12 is configured to determine the video persons appearing in the video and the corresponding character recognition labels.
When a play instruction is received, the video to be played is determined according to the play instruction, and feature recognition is performed on the video to determine the video persons appearing in the video and the corresponding character recognition labels. The process of receiving the user's play instruction is the same as in the embodiment described above and is not repeated here.
The selection unit 13 is configured to provide a selection interface displaying the video persons appearing in the video, so that the user can select the person to be played based on the selection interface.
The receiving unit 11 is further configured to determine, when a selection-complete instruction triggered by the user through the selection interface is received, the person to be played and the character recognition label corresponding to the person to be played.
A selection interface displaying the video persons appearing in the video is provided, so that the user can select the person to be played based on the selection interface; when a selection-complete instruction triggered by the user through the selection interface is received, the person to be played and the character recognition label corresponding to the person to be played are determined.
Preferably, the selection interface may display the facial images of the video persons appearing in the video, so that the user can select the person to be played based on the displayed facial images. Further, information such as the names of the video persons may be displayed on the selection interface as information labels; such information labels may be obtained by querying a figure database, or by querying the databases of other servers over the Internet.
Further, the determining unit 12 is also configured to perform feature recognition on the video and determine the character recognition labels appearing in the video.
The matching unit 14 is configured to match the determined character recognition labels against a saved figure database and determine the video persons corresponding to the character recognition labels.
Feature recognition is performed on the video to determine the character recognition labels appearing in the video; the determined character recognition labels are matched against the saved figure database to determine the video persons corresponding to them. Preferably, face recognition may be performed on the video to determine the facial images appearing in the video, and the determined facial images may be matched against the saved figure database to determine the corresponding video persons; alternatively, the determined facial images may be matched against person data obtained from the Internet to determine the corresponding video persons.
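The matching step can be sketched as follows, assuming the character recognition label is a numeric face-feature vector and the figure database maps person names to enrolled feature vectors; `match_person`, the sample data, and the distance threshold are hypothetical illustrations, not part of the patent.

```python
# Minimal sketch: the nearest enrolled vector within a distance threshold
# identifies the video person; no match within the threshold means unknown.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_person(label, database, threshold=1.0):
    """Return the database person closest to `label`, or None if too far."""
    best_name, best_dist = None, threshold
    for name, enrolled in database.items():
        d = euclidean(label, enrolled)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

database = {"Alice": [0.1, 0.9, 0.3], "Bob": [0.8, 0.2, 0.7]}
print(match_person([0.15, 0.85, 0.35], database))  # Alice
print(match_person([9.0, 9.0, 9.0], database))     # None
```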
In this embodiment, feature recognition is performed on the video, the video persons appearing in the video are determined according to the saved figure database and presented to the user, and the user selects the video person to be played. According to the user's play instruction, the video segments related to the specified video person are then selectively played. Thus, when the video is played, the relevant video segments are automatically screened out according to the user's selection and played, which saves the user time and improves the user experience.
The present invention further provides a video playing method.
Referring to Fig. 7, Fig. 7 is a flowchart of a first embodiment of the video playing method of the present invention.
In this embodiment, the video playing method comprises the following steps:
Step S10: when a play instruction is received, determine a video to be played and a person to be played.
When a video play instruction from the user is received, the video corresponding to the play instruction and the corresponding video person to be played are determined, so that the video segments related to the corresponding video person can be played according to the user's play instruction.
The play instruction may be triggered by the user through a video playing interface that displays a video play list; alternatively, it may be triggered by the user through a provided physical button for playing video; or it may be triggered by the user through a provided shortcut icon for playing video.
The person to be played is a video person who appears in the video to be played. A selection interface may be provided to display the video persons appearing in the video, so that the user can select the person to be played based on the selection interface; when a selection-complete instruction triggered by the user through the selection interface is received, the person to be played is determined. Preferably, the selection interface may display the facial images corresponding to the video persons appearing in the video, so that the user can select the person to be played based on the displayed facial images; when a selection-complete instruction triggered by the user through the selection interface is received, the person to be played is determined.
The person to be played may be one or more of the video persons appearing in the video to be played. That is, according to the user's play instruction, the video segments related to a single video person appearing in the video to be played may be played, or the video segments related to a plurality of video persons appearing in the video to be played may be played together.
Step S20: perform feature recognition on the video and extract the video segments containing the character recognition label corresponding to the person to be played.
Character feature recognition is performed on the video, and the video segments containing the character recognition label corresponding to the person to be played are extracted from the video, so as to obtain the video segments related to the person selected by the user. The character recognition label may be the facial feature of the video person or the voiceprint feature of the video person.
The feature recognition may be performed on the video in any of the following ways. Face recognition may be performed on the image frames of the video, and the video segments containing the person to be played may be identified according to the facial image features of that person. Alternatively, voiceprint recognition may be performed on the audio file of the video: the audio segments containing the voiceprint feature corresponding to the person to be played are identified according to that person's voiceprint feature, and the video segments containing the person are extracted according to the correspondence between the audio file and the video. Alternatively, both may be combined: face recognition is first performed on the image frames to identify first video segments containing the person to be played; voiceprint recognition is then performed on the audio file to identify the audio segments containing the person's voiceprint feature and, via the correspondence between the audio file and the video, to extract second video segments containing the person; the video segments containing the person to be played are then collated from the first video segments and the second video segments.
Step S30: recombine and play the extracted video segments.
The extracted video segments are recombined in chronological order and played. Preferably, if first video segments containing the person to be played have been extracted by face recognition and second video segments have been extracted by voiceprint recognition, the portions duplicated between the first video segments and the second video segments are deleted, and the remaining video segments are output and played in chronological order.
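As a further illustration of step S30, playing the recombined segments amounts to treating them as one virtual timeline and seeking in the source video. The `locate` helper below is a hypothetical sketch of that mapping, not part of the patent; an actual player would seek to the returned source-video position.

```python
# Minimal sketch: map a position on the recombined timeline back to the
# corresponding position in the original video.

def locate(segments, t):
    """Map time `t` on the recombined timeline to source-video time."""
    for start, end in segments:
        length = end - start
        if t < length:
            return start + t       # position falls inside this segment
        t -= length                # skip past this segment on the timeline
    raise ValueError("position beyond recombined video")

segs = [(0.0, 6.0), (10.0, 15.0)]
print(locate(segs, 2.0))   # 2.0  (inside the first segment)
print(locate(segs, 8.0))   # 12.0 (2 s into the second segment)
```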
In this embodiment, feature recognition is performed on the video according to the user's play instruction, so that the video segments related to the video person specified by the user can be selectively played. Thus, when the video is played, the relevant video segments are automatically screened out according to the user's selection and played, which saves the user time and improves the user experience.
Referring to Fig. 8, Fig. 8 is a flowchart of a second embodiment of the video playing method of the present invention. Based on the first embodiment of the video playing method described above, the character recognition label is a facial image, and step S20 further comprises:
Step S210: extract the image frames of the video from the video;
Step S211: perform face recognition on the image frames and obtain the image frames containing the facial image corresponding to the person to be played;
Step S212: combine the obtained image frames into video segments.
The image frames of the video are extracted from the video; face recognition is performed on the image frames to determine the image frames in the video that contain the facial image corresponding to the person to be played; and the image frames containing that facial image are combined into video segments.
Preferably, face recognition may be performed using one or more face recognition algorithms such as feature-based recognition algorithms (based on facial feature points), appearance-based recognition algorithms (based on the whole facial image), template-based recognition algorithms, and recognition algorithms using neural networks.
Preferably, during face recognition, the scenes in the video may be identified in order to improve the efficiency of face recognition within the same scene. For example, the difference between the overall histograms of two images spaced a number of frames apart may be obtained and compared with a preset difference threshold. When the difference between the overall histograms of the two images exceeds the preset difference threshold, it is determined that the video scene has changed, i.e., the two images do not belong to the same video scene; when the difference does not exceed the preset difference threshold, it is determined that the video scene has not changed, i.e., the two images belong to the same video scene.
In this embodiment, face recognition is performed on the image frames of the video to be played, and the video segments related to the person selected by the user are extracted, so that the video segments related to the video person specified by the user can be selectively played. Thus, when the video is played, the relevant video segments are automatically screened out according to the user's selection and played, which saves the user time and improves the user experience.
Referring to Fig. 9, Fig. 9 is a flowchart of a third embodiment of the video playing method of the present invention. Based on the first embodiment of the video playing method described above, the character recognition label is a voiceprint feature, and step S20 further comprises:
Step S220: extract the audio file of the video from the video;
Step S221: perform voiceprint recognition on the audio file and extract the audio segments containing the voiceprint feature corresponding to the person to be played;
Step S222: extract the video segments corresponding to the audio segments from the video.
The audio file of the video is extracted from the video; voiceprint recognition is performed on the audio file to extract the audio segments containing the voiceprint feature corresponding to the person to be played; and the video segments corresponding to those audio segments are extracted from the video.
Preferably, voiceprint recognition may be performed on the audio file as follows. The voice of each speaker to be recognized is regarded as one signal source and can be characterized by a codebook. A group of vectors is extracted from the voice to be recognized and vector quantization is performed on them in turn to obtain the speaker's feature vector sequence; the efficiency and precision of voiceprint recognition are thus independent of the speaker's language and of the text length. An independent vector model is established for the feature vector sequence of each speaker, so that the voice features of each speaker form their own specific feature clustering center in the feature space. The audio segments with similar voiceprint features are classified and marked, and the audio segments containing the voiceprint feature corresponding to the person to be played are extracted from them.
In this embodiment, voiceprint recognition is performed on the audio file of the video to be played, and the video segments related to the person selected by the user are extracted, so that the video segments related to the video person specified by the user can be selectively played. Thus, when the video is played, the relevant video segments are automatically screened out according to the user's selection and played, which saves the user time and improves the user experience.
With reference to the schematic flow sheet that Figure 10, Figure 10 are the 4th embodiment of video broadcasting method of the present invention.Based on the first embodiment of above-mentioned video broadcasting method, described step S10 also comprises:
Step S11, when receiving play instruction, determines video to be played according to described play instruction;
Step S12, determines the character recognition and label of video personage and the correspondence occurred in described video;
When receiving play instruction, determine video to be played according to described play instruction; Feature identification is carried out to described video, to determine the character recognition and label of video personage and the correspondence occurred in described video.The process receiving the play instruction of user is identical with the process in an above-mentioned embodiment, does not repeat them here.
Referring to Figure 11, Figure 11 is a flowchart of a preferred embodiment of the step of determining the video characters appearing in the video and their corresponding character identification labels in the present invention:
Step S120: performing feature recognition on the video and determining the character identification labels appearing in it;
Step S121: matching the determined character identification labels against a saved character database to determine the video character corresponding to each label.
Feature recognition is performed on the video to determine the character identification labels appearing in it; the determined labels are matched against a saved character database to determine the corresponding video characters. Preferably, face recognition may be performed on the video to determine the face images appearing in it, and the determined face images may be matched against the saved character database to determine the corresponding video characters; alternatively, the determined face images may be matched against character data obtained from the Internet to determine the corresponding video characters.
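As an illustrative sketch only (the patent does not prescribe an implementation), matching a detected face against the saved character database can be done by nearest-neighbor search in an embedding space. The embedding model, the database layout, and the distance threshold below are all assumptions:

```python
import numpy as np

def match_character(face_embedding, character_db, threshold=0.6):
    """Match a detected face embedding against a saved character database.

    character_db: dict mapping character name -> reference embedding
    (a 1-D numpy array produced by some face-embedding model).
    Returns the best-matching character name, or None when no reference
    is close enough (the face may belong to an unlisted person).
    """
    best_name, best_dist = None, float("inf")
    for name, ref in character_db.items():
        # Euclidean distance in embedding space as the similarity measure.
        dist = np.linalg.norm(face_embedding - ref)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```

The threshold lets the matcher fall back to the Internet lookup the paragraph mentions when the local database produces no confident match.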
Step S13: providing a selection interface displaying the video characters appearing in the video, for the user to select the character to be played through the selection interface;
Step S14: when a selection-complete instruction triggered by the user through the selection interface is received, determining the character to be played and the character identification label corresponding to that character.
A selection interface displaying the video characters appearing in the video is provided for the user to select the character to be played; when a selection-complete instruction triggered by the user through the selection interface is received, the character to be played and its corresponding character identification label are determined.
Preferably, the selection interface may display the face images of the video characters appearing in the video, so that the user selects the character to be played from the displayed face images. Further, information such as the names of the video characters may be displayed in the selection interface as information labels; these labels may be obtained by querying the character database, or by querying the database of another server over the Internet.
In this embodiment, feature recognition is performed on the video, the video characters appearing in it are determined from the saved character database and presented to the user, the user selects the character to be played, and the video segments associated with the user-specified character are then played selectively according to the user's play instruction. In this way, when the video is played, the segments relevant to the user's selection are automatically screened and played, saving the user's time and improving the user experience.
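As a final illustrative sketch (an assumption about one possible implementation, not the patent's prescribed method), the "recombine and play" step can be realized by grouping the frame indices where the selected character was recognized into contiguous time segments; the frame rate and gap tolerance below are illustrative parameters:

```python
def frames_to_segments(matched_frames, fps=25.0, max_gap=12):
    """Group frame indices where the selected character appears into
    contiguous (start_time, end_time) segments for recombined playback.

    Frames separated by at most `max_gap` frames are merged into one
    segment, so brief recognition dropouts do not split a scene.
    """
    if not matched_frames:
        return []
    frames = sorted(matched_frames)
    segments, start, prev = [], frames[0], frames[0]
    for f in frames[1:]:
        if f - prev > max_gap:
            # Gap too large: close the current segment and start a new one.
            segments.append((start / fps, prev / fps))
            start = f
        prev = f
    segments.append((start / fps, prev / fps))
    return segments
```

The resulting list of time ranges could then be handed to any player that supports seeking, which is one way the playing module might "recombine and play" the extracted segments.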
It should be noted that, as used herein, the terms "comprise" and "include", and any variants thereof, are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Absent further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises that element.
The numbering of the above embodiments of the present invention is for description only and does not indicate the relative merit of the embodiments.
From the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and including instructions that cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit its patent scope; any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (10)

1. A video playing device, characterized in that the video playing device comprises:
a receiving module, configured to determine, when a play instruction is received, the video to be played and the character to be played;
an extraction module, configured to perform feature recognition on the video and extract the video segments containing the character identification label corresponding to the character to be played;
a playing module, configured to recombine and play the extracted video segments.
2. The video playing device according to claim 1, characterized in that the extraction module comprises an image extraction unit, a face recognition unit, and a combination unit;
the image extraction unit is configured to extract the image frames of the video from the video;
the face recognition unit is configured to perform face recognition on the image frames to obtain the image frames containing the face image corresponding to the character to be played;
the combination unit is configured to combine the obtained image frames into video segments.
3. The video playing device according to claim 2, characterized in that the extraction module further comprises an audio extraction unit and a voiceprint recognition unit;
the audio extraction unit is configured to extract the audio file of the video from the video;
the voiceprint recognition unit is configured to perform voiceprint recognition on the audio file and extract the audio segments containing the voiceprint feature corresponding to the character to be played;
the combination unit is further configured to extract the video segments corresponding to the audio segments from the video.
4. The video playing device according to any one of claims 1 to 3, characterized in that the receiving module comprises a receiving unit, a determining unit, and a selection unit;
the receiving unit is configured to determine, when a play instruction is received, the video to be played according to the play instruction;
the determining unit is configured to determine the video characters appearing in the video and their corresponding character identification labels;
the selection unit is configured to provide a selection interface displaying the video characters appearing in the video, for the user to select the character to be played through the selection interface;
the receiving unit is further configured to determine, when a selection-complete instruction triggered by the user through the selection interface is received, the character to be played and the character identification label corresponding to that character.
5. The video playing device according to claim 4, characterized in that the receiving module further comprises a matching unit;
the determining unit is further configured to perform feature recognition on the video and determine the character identification labels appearing in it;
the matching unit is configured to match the determined character identification labels against a saved character database to determine the video characters corresponding to the labels.
6. A video playing method, characterized in that the video playing method comprises the following steps:
when a play instruction is received, determining the video to be played and the character to be played;
performing feature recognition on the video, and extracting the video segments containing the character identification label corresponding to the character to be played;
recombining and playing the extracted video segments.
7. The video playing method according to claim 6, characterized in that the character identification label is a face image, and the step of performing feature recognition on the video and extracting the video segments containing the character identification label corresponding to the character to be played comprises:
extracting the image frames of the video from the video;
performing face recognition on the image frames to obtain the image frames containing the face image corresponding to the character to be played;
combining the obtained image frames into video segments.
8. The video playing method according to claim 6, characterized in that the character identification label is a voiceprint feature, and the step of performing feature recognition on the video and extracting the video segments containing the character identification label corresponding to the character to be played comprises:
extracting the audio file of the video from the video;
performing voiceprint recognition on the audio file, and extracting the audio segments containing the voiceprint feature corresponding to the character to be played;
extracting the video segments corresponding to the audio segments from the video.
9. The video playing method according to any one of claims 6 to 8, characterized in that the step of determining, when a play instruction is received, the video to be played and the character to be played comprises:
when a play instruction is received, determining the video to be played according to the play instruction;
determining the video characters appearing in the video and their corresponding character identification labels;
providing a selection interface displaying the video characters appearing in the video, for the user to select the character to be played through the selection interface;
when a selection-complete instruction triggered by the user through the selection interface is received, determining the character to be played and the character identification label corresponding to that character.
10. The video playing method according to claim 9, characterized in that the step of determining the video characters appearing in the video and their corresponding character identification labels comprises:
performing feature recognition on the video, and determining the character identification labels appearing in it;
matching the determined character identification labels against a saved character database to determine the video characters corresponding to the labels.
CN201510452582.1A 2015-07-28 2015-07-28 Video play device and method Active CN105100892B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510452582.1A CN105100892B (en) 2015-07-28 2015-07-28 Video play device and method

Publications (2)

Publication Number Publication Date
CN105100892A true CN105100892A (en) 2015-11-25
CN105100892B CN105100892B (en) 2018-05-15

Family

ID=54580281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510452582.1A Active CN105100892B (en) 2015-07-28 2015-07-28 Video play device and method

Country Status (1)

Country Link
CN (1) CN105100892B (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105933635A (en) * 2016-05-04 2016-09-07 王磊 Method for attaching label to audio and video content
CN106372607A (en) * 2016-09-05 2017-02-01 努比亚技术有限公司 Method for reading pictures from videos and mobile terminal
CN107743248A (en) * 2017-09-28 2018-02-27 北京奇艺世纪科技有限公司 A kind of video fast forward method and device
CN107820123A (en) * 2017-10-25 2018-03-20 深圳天珑无线科技有限公司 Method, mobile terminal and the storage device of mobile terminal screen printing picture
CN107820138A (en) * 2017-11-06 2018-03-20 广东欧珀移动通信有限公司 Video broadcasting method, device, terminal and storage medium
WO2018076998A1 (en) * 2016-10-28 2018-05-03 北京奇虎科技有限公司 Method and device for generating playback video file
CN108494666A (en) * 2018-04-01 2018-09-04 王勇 Internet chat tool
CN108829845A (en) * 2018-06-20 2018-11-16 北京奇艺世纪科技有限公司 A kind of audio file play method, device and electronic equipment
CN108924518A (en) * 2018-08-27 2018-11-30 深圳艺达文化传媒有限公司 Background synthetic method and Related product in promotion video
CN109982126A (en) * 2017-12-27 2019-07-05 艾迪普(北京)文化科技股份有限公司 A kind of stacking method of associated video
CN110035313A (en) * 2019-02-28 2019-07-19 阿里巴巴集团控股有限公司 Video playing control method, video playing control device, terminal device and electronic equipment
CN110225369A (en) * 2019-07-16 2019-09-10 百度在线网络技术(北京)有限公司 Video selection playback method, device, equipment and readable storage medium storing program for executing
CN110267114A (en) * 2019-07-04 2019-09-20 广州酷狗计算机科技有限公司 Playback method, device, terminal and the storage medium of video file
CN110582025A (en) * 2018-06-08 2019-12-17 北京百度网讯科技有限公司 Method and apparatus for processing video
CN110996138A (en) * 2019-12-17 2020-04-10 腾讯科技(深圳)有限公司 Video annotation method, device and storage medium
WO2020135643A1 (en) * 2018-12-27 2020-07-02 深圳Tcl新技术有限公司 Target character video clip playback method, system and apparatus, and storage medium
CN111385641A (en) * 2018-12-29 2020-07-07 深圳Tcl新技术有限公司 Video processing method, smart television and storage medium
CN111683285A (en) * 2020-08-11 2020-09-18 腾讯科技(深圳)有限公司 File content identification method and device, computer equipment and storage medium
CN112153321A (en) * 2019-06-28 2020-12-29 华为技术有限公司 Conference recording method, device and system
CN112801004A (en) * 2021-02-05 2021-05-14 网易(杭州)网络有限公司 Method, device and equipment for screening video clips and storage medium
US11678029B2 (en) 2019-12-17 2023-06-13 Tencent Technology (Shenzhen) Company Limited Video labeling method and apparatus, device, and computer-readable storage medium
US11974067B2 (en) 2019-06-28 2024-04-30 Huawei Technologies Co., Ltd. Conference recording method and apparatus, and conference recording system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102685574A (en) * 2011-03-09 2012-09-19 须泽中 System for automatically extracting images from digital television program and application thereof
CN103442252A (en) * 2013-08-21 2013-12-11 宇龙计算机通信科技(深圳)有限公司 Method and device for processing video
CN103678308A (en) * 2012-09-03 2014-03-26 许丰 Intelligent navigation player
CN104796781A (en) * 2015-03-31 2015-07-22 小米科技有限责任公司 Video clip extraction method and device

Also Published As

Publication number Publication date
CN105100892B (en) 2018-05-15

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant