CN106792224A - Terminal and video playing method - Google Patents

Terminal and video playing method

Info

Publication number
CN106792224A
CN106792224A (Application CN201710004991.4A)
Authority
CN
China
Prior art keywords
video
air
gesture
default
trigger action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710004991.4A
Other languages
Chinese (zh)
Other versions
CN106792224B (en)
Inventor
谭湛江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Baixia High Tech Industrial Park Investment Development Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd
Priority to CN201710004991.4A
Publication of CN106792224A
Application granted
Publication of CN106792224B
Expired - Fee Related
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 - End-user applications
    • H04N21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures

Abstract

The invention discloses a terminal and a video playing method. The terminal includes a detection module, a display module, and a playing module. Under the video application interface, the detection module detects a mid-air gesture in front of a preset position on the interface. When the detection module determines that the detected mid-air gesture matches a preset first trigger operation, the display module displays, at the preset position, a label of a first video that was watched last time, the first video being the video that was last left unfinished. When the detection module detects a second trigger operation on the label, the playing module starts playing the first video, with playback starting from the time point at which it last stopped. With the scheme of the embodiments of the present invention, when a user wants to continue watching a video that was not finished last time, user operations are reduced and the user experience is improved.

Description

Terminal and video playing method
Technical field
The present invention relates to the field of terminal applications, and in particular to a terminal and a video playing method.
Background technology
At present, watching videos through video applications on a terminal has become a ubiquitous form of entertainment. However, current video applications have a problem: after a video application is opened, a user who wants to continue watching a video left unfinished last time must reopen that video and drag the progress bar to the corresponding position before viewing can continue from where it left off. This procedure is cumbersome for the user and results in a poor user experience.
Summary of the invention
The main objective of the present invention is to provide a terminal and a video playing method that reduce user operations and improve the user experience when the user wants to continue watching a video left unfinished last time.
To achieve the above objective, the present invention provides a terminal. The terminal includes a detection module, a display module, and a playing module.
The detection module is configured to detect, under the video application interface, a mid-air gesture in front of a preset position on the interface;
The display module is configured to display, at the preset position, a label of a first video that was watched last time when the detection module determines that the detected mid-air gesture matches a preset first trigger operation, the first video being the video that was last left unfinished;
The playing module is configured to start playing the first video when the detection module detects a second trigger operation on the label, with playback starting from the time point at which it last stopped.
Optionally, the detection module detecting the mid-air gesture in front of the preset position on the video application interface includes:
detecting the return data of each proximity sensor in a preset proximity sensor array;
parsing, from the return data, the time at which each proximity sensor returned data and the magnitude of the returned data;
determining, from the return times and data magnitudes, the response order of the proximity sensors and the change in distance between the hand and each proximity sensor while the mid-air gesture is performed; and
determining the mid-air gesture from the response order of the proximity sensors and the distance changes between the hand and each proximity sensor.
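The four detection steps above can be sketched in code. The following is a hypothetical illustration, not the patent's implementation: it assumes a minimal two-sensor array where each proximity sensor reports (time, value) samples, with a larger value taken to mean the hand is closer, and infers a horizontal swipe from the order in which the sensors first respond.

```python
# Hypothetical sketch of mid-air gesture detection from a proximity sensor
# array (the names, the two-sensor layout, and the threshold are assumptions,
# not from the patent). Each sensor reports (time, value) samples.

def first_response_time(samples, threshold=50):
    """Time at which a sensor's return value first exceeds the threshold."""
    for t, value in samples:
        if value > threshold:
            return t
    return float('inf')  # sensor never responded

def infer_gesture(returns):
    """returns: dict mapping sensor position ('left'/'right') to samples."""
    # Steps 1-2: parse response time and data magnitude per sensor.
    # Step 3: the sorted first-response times give the response order.
    order = sorted(returns, key=lambda pos: first_response_time(returns[pos]))
    # Step 4: the response order determines the swipe direction.
    if order == ['left', 'right']:
        return 'swipe_left_to_right'
    if order == ['right', 'left']:
        return 'swipe_right_to_left'
    return 'unknown'

samples = {
    'left':  [(0.00, 10), (0.05, 120), (0.10, 200)],
    'right': [(0.00, 5),  (0.15, 90),  (0.20, 180)],
}
print(infer_gesture(samples))  # swipe_left_to_right
```

A real array (as in Figs. 6 and 7) would have more sensors and would also use the magnitude changes to distinguish, for example, an approach from a swipe; the sketch keeps only the response-order part of the logic.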
Optionally, the detection module determining that the detected mid-air gesture matches the preset first trigger operation includes:
comparing the detected mid-air gesture with the preset first trigger operation; and
determining that the detected mid-air gesture matches the preset first trigger operation when the two are identical, or when their similarity is greater than or equal to a preset similarity threshold.
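As a minimal sketch of this matching step, assuming the detected gesture and the preset first trigger operation are both represented as numeric feature vectors (a representation the patent does not specify), the comparison could use cosine similarity against the preset threshold:

```python
# Hypothetical sketch of gesture matching. The feature-vector representation
# and the use of cosine similarity are assumptions; the patent only requires
# "identical, or similarity >= a preset similarity threshold".

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def matches_first_trigger(gesture, template, threshold=0.9):
    # Identical gestures match outright; otherwise compare similarity
    # against the preset threshold.
    return gesture == template or cosine_similarity(gesture, template) >= threshold

template = [1.0, 0.0, 0.5]  # preset first trigger operation (invented values)
print(matches_first_trigger([1.0, 0.1, 0.45], template))  # True
print(matches_first_trigger([0.0, 1.0, 0.0], template))   # False
```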
Optionally, the second trigger operation includes one or more of: a tap operation, a slide operation, a mid-air gesture, and a voice command.
Optionally, the label of the first video includes: a thumbnail window of the first video or a link to the first video.
Optionally, the playing module starting to play the first video includes:
enlarging the thumbnail window and then playing the first video; or
opening the first video through the preset link and playing it.
In addition, to achieve the above objective, the present invention further provides a video playing method. The method includes:
detecting, under the video application interface, a mid-air gesture in front of a preset position on the interface;
displaying, at the preset position, a label of a first video that was watched last time when it is determined that the detected mid-air gesture matches a preset first trigger operation, the first video being the video that was last left unfinished; and
starting to play the first video when a second trigger operation on the label is detected, with playback starting from the time point at which it last stopped.
Optionally, detecting the mid-air gesture in front of the preset position on the video application interface includes:
detecting the return data of each proximity sensor in a preset proximity sensor array;
parsing, from the return data, the time at which each proximity sensor returned data and the magnitude of the returned data;
determining, from the return times and data magnitudes, the response order of the proximity sensors and the change in distance between the hand and each proximity sensor while the mid-air gesture is performed; and
determining the mid-air gesture from the response order of the proximity sensors and the distance changes between the hand and each proximity sensor.
Optionally, determining that the detected mid-air gesture matches the preset first trigger operation includes:
comparing the detected mid-air gesture with the preset first trigger operation; and
determining that the detected mid-air gesture matches the preset first trigger operation when the two are identical, or when their similarity is greater than or equal to a preset similarity threshold.
Optionally, the second trigger operation includes one or more of: a tap operation, a slide operation, a mid-air gesture, and a voice command.
Optionally, the label of the first video includes: a thumbnail window of the first video or a link to the first video.
Optionally, starting to play the first video includes:
enlarging the thumbnail window and then playing the first video; or
opening the first video through the preset link and playing it.
The present invention provides a terminal and a video playing method. The terminal includes a detection module, a display module, and a playing module. Under the video application interface, the detection module detects a mid-air gesture in front of a preset position on the interface. When the detection module determines that the detected mid-air gesture matches a preset first trigger operation, the display module displays, at the preset position, the label of the first video watched last time, the first video being the video that was last left unfinished. When the detection module detects a second trigger operation on the label, the playing module starts playing the first video from the time point at which playback last stopped. With the scheme of the embodiments of the present invention, when a user wants to continue watching a video left unfinished last time, user operations are reduced and the user experience is improved.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for implementing the embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a block diagram of the terminal according to an embodiment of the present invention;
Fig. 4 is a flowchart of the video playing method according to an embodiment of the present invention;
Fig. 5 is a flowchart of a method for detecting a mid-air gesture in front of a preset position on the video application interface according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a first embodiment of the proximity sensor distribution array according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of a second embodiment of the proximity sensor distribution array according to an embodiment of the present invention;
Fig. 8(a) is a schematic diagram of a first operation recognition example in the first distribution array embodiment according to an embodiment of the present invention;
Fig. 8(b) is a schematic diagram of a second operation recognition example in the first distribution array embodiment according to an embodiment of the present invention;
Fig. 8(c) is a schematic diagram of a third operation recognition example in the first distribution array embodiment according to an embodiment of the present invention;
Fig. 8(d) is a schematic diagram of a fourth operation recognition example in the first distribution array embodiment according to an embodiment of the present invention;
Fig. 9(a) is a schematic diagram of a first operation recognition example in the second distribution array embodiment according to an embodiment of the present invention;
Fig. 9(b) is a schematic diagram of a second operation recognition example in the second distribution array embodiment according to an embodiment of the present invention;
Fig. 9(c) is a schematic diagram of a third operation recognition example in the second distribution array embodiment according to an embodiment of the present invention;
Fig. 9(d) is a schematic diagram of a fourth operation recognition example in the second distribution array embodiment according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of label display according to an embodiment of the present invention;
Fig. 11 is a schematic diagram of playing the first video according to an embodiment of the present invention.
The realization of the objectives, functional characteristics, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
An optional mobile terminal implementing the embodiments of the present invention will now be described with reference to the drawings. In the following description, suffixes such as "module", "part", or "unit" are used only to facilitate the description of the invention and have no specific meaning by themselves; "module" and "part" may therefore be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. Hereinafter it is assumed that the terminal is a mobile terminal. However, those skilled in the art will understand that, apart from elements used particularly for mobile purposes, the construction according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Fig. 1 illustrates the hardware structure of a mobile terminal implementing the embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and so on. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may alternatively be implemented. The elements of the mobile terminal will be described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114, and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and sends broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and sends them to a terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like; it may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast-related information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist in the form of an electronic program guide (EPG) of digital multimedia broadcasting (DMB), or an electronic service guide (ESG) of digital video broadcast-handheld (DVB-H). The broadcast receiving module 111 can receive signals and broadcasts using various types of broadcast systems. In particular, it can receive digital broadcasts using digital broadcast systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcast-handheld (DVB-H), the MediaFLO forward link media data broadcast system, and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be constructed to suit the above digital broadcast systems as well as other broadcast systems providing broadcast signals. The broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 sends radio signals to and/or receives radio signals from at least one of a base station (for example, an access point, a node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data sent and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access for the mobile terminal. The module may be internally or externally coupled to the terminal. The wireless Internet access technologies involved may include WLAN (Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and the like.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™, and the like.
The location information module 115 is a module for checking or obtaining the location information of the mobile terminal. A typical example of the location information module is a GPS (global positioning system) module. Under current technology, the GPS module 115 calculates distance information from three or more satellites together with accurate time information, and applies triangulation to the calculated information to accurately calculate three-dimensional current location information in terms of longitude, latitude, and altitude. Currently, the method for calculating position and time information uses three satellites and corrects the error of the calculated position and time information using another satellite. In addition, the GPS module 115 can calculate speed information by continuously calculating the current location information in real time.
The A/V input unit 120 is used to receive audio or video signals and may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture apparatus in a video capture mode or image capture mode. The processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or another storage medium) or sent via the wireless communication unit 110; two or more cameras 121 may be provided according to the construction of the mobile terminal. The microphone 122 can receive sound (audio data) via the microphone in operating modes such as the phone call mode, recording mode, and voice recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the mobile communication module 112 for output. The microphone 122 can implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) noise or interference generated while receiving and sending audio signals.
The user input unit 130 can generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (for example, a touch-sensitive component that detects changes in resistance, pressure, capacitance, and the like caused by contact), a jog wheel, a jog switch, and so on. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (for example, the open or closed state of the mobile terminal 100), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (that is, touch input), the orientation of the mobile terminal 100, the acceleration or deceleration movement and direction of the mobile terminal 100, and so on, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 can sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a proximity sensor 1410, which will be described below in connection with the touch screen.
The interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The identification module may store various information for verifying the user's use of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and so on. In addition, the device having the identification module (hereinafter referred to as an "identifying device") may take the form of a smart card; therefore, the identifying device can be connected with the mobile terminal 100 via a port or other connection means. The interface unit 170 can be used to receive input (for example, data information, power, etc.) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 can serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. The various command signals or power input from the cradle can serve as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 150 is configured to provide output signals (for example, audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and so on.
The display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in the phone call mode, the display unit 151 can display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (for example, text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in the video call mode or the image capture mode, the display unit 151 can display captured and/or received images, a UI or GUI showing the video or images and related functions, and so on.
Meanwhile, when the display unit 151 and the touch pad are superimposed on each other in the form of layers to form a touch screen, the display unit 151 can serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, and a three-dimensional (3D) display. Some of these displays may be constructed to be transparent to allow viewing from the outside; these may be called transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 152 can, when the mobile terminal is in modes such as the call signal reception mode, call mode, recording mode, voice recognition mode, and broadcast reception mode, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into audio signals and output them as sound. Moreover, the audio output module 152 can provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and so on.
The alarm unit 153 can provide output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and so on. In addition to audio or video output, the alarm unit 153 can provide output in different manners to notify of the occurrence of an event. For example, the alarm unit 153 can provide output in the form of vibration: when a call, message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (that is, vibration) to notify the user. By providing such tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 can store software programs for the processing and control operations performed by the controller 180, or can temporarily store data that has been output or will be output (for example, a phone book, messages, still images, video, etc.). Moreover, the memory 160 can store data on the vibrations and audio signals of various modes output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disc, and so on. Moreover, the mobile terminal 100 can cooperate, through a network connection, with a network storage device that performs the storage function of the memory 160.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls, and so on. In addition, the controller 180 may include a multimedia module 1810 for reproducing (or playing back) multimedia data; the multimedia module 1810 may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 can perform pattern recognition processing to recognize handwriting input or picture drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides the appropriate power required to operate the various elements and components.
The various implementations described herein can be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the implementations described herein can be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, and an electronic unit designed to perform the functions described herein; in some cases, such implementations can be implemented in the controller 180. For a software implementation, implementations such as processes or functions can be implemented with separate software modules that allow at least one function or operation to be performed. Software code can be implemented by a software application (or program) written in any suitable programming language, stored in the memory 160, and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. Below, for the sake of brevity, a slide-type mobile terminal among various types of mobile terminals such as folder-type, bar-type, swing-type, and slide-type mobile terminals will be described as an example. Nevertheless, the present invention can be applied to any type of mobile terminal and is not limited to slide-type mobile terminals.
The mobile terminal 100 as shown in Fig. 1 may be constructed to operate with wired and wireless communication systems, as well as satellite-based communication systems, that transmit data via frames or packets.
A communication system in which the mobile terminal according to the present invention can operate will now be described with reference to Fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, air interfaces used by communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), global system for mobile communications (GSM), and so on. As a non-limiting example, the following description relates to a CDMA communication system, but such teachings apply equally to other types of systems.
With reference to Fig. 2, cdma wireless communication system can include multiple mobile terminal 1s 00, multiple base station (BS) 270, base station Controller (BSC) 275 and mobile switching centre (MSC) 280.MSC280 is configured to and Public Switched Telephony Network (PSTN) 290 form interface.MSC280 is also structured to form interface with the BSC275 that can be couple to base station 270 via back haul link. If any one in the interface that back haul link can be known according to Ganji is constructed, the interface includes such as E1/T1, ATM, IP, PPP, frame relay, HDSL, ADSL or xDSL.It will be appreciated that system can include multiple BSC2750 as shown in Figure 2.
Each BS270 can service one or more subregions (or region), by multidirectional antenna or the day of sensing specific direction Each subregion of line covering is radially away from BS270.Or, each subregion can be by two or more for diversity reception Antenna is covered.Each BS270 may be constructed such that the multiple frequency distribution of support, and the distribution of each frequency has specific frequency spectrum (for example, 1.25MHz, 5MHz etc.).
What subregion and frequency were distributed intersects can be referred to as CDMA Channel.BS270 can also be referred to as base station transceiver System (BTS) or other equivalent terms.In this case, term " base station " can be used for broadly representing single BSC275 and at least one BS270.Base station can also be referred to as " cellular station ".Or, each subregion of specific BS270 can be claimed It is multiple cellular stations.
As shown in Fig. 2, a broadcasting transmitter (BT) 295 transmits a broadcast signal to the mobile terminals 100 operating within the system. The broadcast receiving module 111 as shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signal transmitted by the BT 295. Fig. 2 also shows several global positioning system (GPS) satellites 300. The satellites 300 help locate at least one of the plurality of mobile terminals 100.
In Fig. 2, a plurality of satellites 300 are depicted, but it is understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 as shown in Fig. 1 is typically configured to cooperate with the satellites 300 to obtain the desired positioning information. Instead of or in addition to GPS tracking techniques, other techniques capable of tracking the location of the mobile terminal may be used. In addition, at least one GPS satellite 300 may optionally or additionally handle satellite DMB transmissions.
As one typical operation of the wireless communication system, the BSs 270 receive reverse-link signals from various mobile terminals 100. The mobile terminals 100 typically engage in calls, messaging, and other types of communications. Each reverse-link signal received by a particular BS 270 is processed within that BS 270, and the resulting data is forwarded to an associated BSC 275. The BSC provides call resource allocation and mobility management functionality, including the coordination of soft handoff procedures between BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC interfaces with the BSCs 275, and the BSCs 275 in turn control the BSs 270 to transmit forward-link signals to the mobile terminals 100.
Based on the above mobile terminal hardware structure and communication system, various embodiments of the method of the present invention are proposed.
As shown in Fig. 3, a first embodiment of the present invention proposes a terminal 1, the terminal including: a detection module 11, a display module 12, and a playing module 13.
The detection module 11 is configured to detect, in a video application interface, an in-air gesture in front of a preset position on the video application interface.
The display module 12 is configured to display, at the preset position, a label of a first video watched last time when the detection module determines that the detected in-air gesture matches a preset first trigger operation; wherein the first video is the video that was not finished last time.
The playing module 13 is configured to start playing the first video when the detection module detects a second trigger operation directed at the label; wherein the starting time point of playback is the time point at which playback was last stopped.
Optionally, the detection module 11 detecting the in-air gesture in front of the preset position on the video application interface includes:
detecting return data of each proximity sensor in a preset proximity sensor array;
parsing, from the return data, the time at which each proximity sensor returned data and the magnitude of the returned data;
determining, according to the time at which each proximity sensor returned data and the magnitude of the returned data, the response order of the proximity sensors and the change in distance between the hand and each proximity sensor during execution of the in-air gesture; and
determining the in-air gesture according to the response order of the proximity sensors and the change in distance between the hand and each proximity sensor during execution of the in-air gesture.
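The middle two steps above — deriving the response order and the hand-to-sensor distance trend from timestamped return data — can be sketched as follows. This is a minimal illustration under stated assumptions, not the claimed implementation: the `Reading` record, the event format, and the reading of a larger return value as "hand closer" are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str    # label of the proximity sensor, e.g. "A"
    time: float    # time at which the sensor returned data
    value: float   # magnitude of the returned data

def response_order(readings):
    """Order sensor labels by the time of their first return (one reading
    of the 'response order' step)."""
    first_seen = {}
    for r in sorted(readings, key=lambda r: r.time):
        first_seen.setdefault(r.sensor, r.time)
    return sorted(first_seen, key=first_seen.get)

def distance_trend(readings):
    """Per sensor, whether the hand ended closer or farther than it began.
    A larger return value is assumed to mean a closer hand."""
    by_sensor = {}
    for r in readings:
        by_sensor.setdefault(r.sensor, []).append(r)
    trend = {}
    for sensor, rs in by_sensor.items():
        rs.sort(key=lambda r: r.time)
        trend[sensor] = "closer" if rs[-1].value > rs[0].value else "farther"
    return trend
```

For instance, return data in which sensors C and D respond first and weaken while A and B respond later and strengthen would yield the order C, D, A, B with C, D trending "farther" — the kind of signature the later figures associate with a directional swipe.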
Optionally, the detection module 11 determining that the detected in-air gesture matches the preset first trigger operation includes:
comparing the detected in-air gesture with the preset first trigger operation; and
determining that the detected in-air gesture matches the preset first trigger operation when the detected in-air gesture is identical to the preset first trigger operation or their similarity is greater than or equal to a preset similarity threshold.
Optionally, the second trigger operation includes one or more of: a click operation, a slide operation, an in-air gesture, and a voice command.
Optionally, the label of the first video includes: a thumbnail window of the first video or a link to the first video.
Optionally, the playing module 13 starting to play the first video includes:
enlarging the thumbnail window and then playing the first video; or,
opening the first video via the preset link and playing it.
In addition, to achieve the above object, the present invention further proposes a video playing method. As shown in Fig. 4, the method includes S101-S103:
S101: in a video application interface, detecting an in-air gesture in front of a preset position on the video application interface.
In an embodiment of the present invention, when a user wants to continue watching a video that was left unfinished last time, in order to reduce user operations, a label of the video can be preset at a preset position on the video application interface. After the user opens the video application, a preset operation — the in-air gesture described above — can be performed at the preset position to call up the label, and through the label the unfinished video can be resumed directly from where playback last stopped. It should be noted that the preset position may be any position on the video application interface; its specific location is not limited here.
Before this scheme is carried out, the in-air gesture in front of the preset position on the video application interface needs to be detected first, which may specifically be achieved by the following scheme.
Optionally, as shown in Fig. 5, detecting the in-air gesture in front of the preset position on the video application interface includes S201-S204:
S201: detecting return data of each proximity sensor in a preset proximity sensor array.
In an embodiment of the present invention, detection of the in-air gesture can be realized by a proximity sensor array at the preset position of the terminal. The proximity sensor array means that one or more preset proximity sensors are kept in a preset distribution array, and each proximity sensor in the distribution array is assigned a different label according to a preset rule.
The distribution array includes:
a grid array composed of a proximity sensors in each row in the transverse direction and b proximity sensors in each column in the longitudinal direction, where a and b are positive integers, a ≥ 1 and b ≥ 1; and,
a trapezoidal array composed of proximity sensors placed at an angle α alternating with proximity sensors placed at an angle β, and/or a sensor array composed of a plurality of such trapezoidal arrays.
In an embodiment of the present invention, before introducing the distribution array of proximity sensors, the transverse and longitudinal directions used in the embodiment may first be defined. If the placement of the terminal is defined such that the terminal screen faces the user, the two longer of the screen's four edges are parallel to the horizontal plane, and the two shorter edges are perpendicular to it, then the direction of the two longer edges can be taken as the transverse direction and the direction of the two shorter edges as the longitudinal direction; each line of sensors distributed transversely may be called a row, and each line distributed longitudinally may be called a column.
Based on the above definition of directions, in one embodiment of the present invention the distribution array may be set as follows: a proximity sensors are arranged in each row in the transverse direction and b proximity sensors in each column in the longitudinal direction, and these a × b proximity sensors form a grid array. The specific values of a and b can be set freely according to demand or the size of the terminal, without particular limitation. For example, 3 proximity sensors may be set in each transverse row and 2 in each longitudinal column, i.e. a = 3, b = 2; or 2 proximity sensors may be set in each row and each column, i.e. a = b = 2, as shown in Fig. 6.
In another embodiment, the distribution array may also be set as follows: proximity sensors placed at an angle α alternate with proximity sensors placed at an angle β to form a trapezoidal array or a parallelogram array; or, a plurality of such trapezoidal and/or parallelogram arrays form a new array. In this embodiment, the number of proximity sensors in each trapezoidal or parallelogram array is not limited; likewise, the specific values of the angles α and β are not limited, and arrays of different shapes can be formed from different angle values. Optionally, in order to form the above trapezoidal and parallelogram arrays, 0 < α − β ≤ 90° may be set, as shown in Fig. 7.
In addition, in an embodiment of the present invention, in order to clearly track the state change of each proximity sensor in the subsequent process, each proximity sensor is assigned a different label. The labeling rule can be defined freely for different application scenarios, for example according to the installation order of the sensors, or according to the positional relationship of the sensors within the distribution array. The labels themselves can also be self-defined without particular limitation, for example the letters A, B, C, D, E, …, the numbers 1, 2, 3, 4, 5, …, names, and so on.
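As a concrete illustration of one such labeling rule, the grid case can be modeled as a lookup from letter labels to (row, column) positions. The letter scheme and the row-major ordering below are only one of the many preset rules the text allows, chosen here for the example.

```python
import string

def label_grid(rows, cols):
    """Assign letter labels A, B, C, ... to an a x b grid of
    proximity sensors in row-major order (one possible preset rule)."""
    labels = {}
    for i in range(rows):
        for j in range(cols):
            labels[string.ascii_uppercase[i * cols + j]] = (i, j)
    return labels
```

With a = b = 2 this reproduces the four sensors A, B, C, D of Fig. 6, with A and B in the top row and C and D in the bottom row.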
Through the above arrangement of the proximity sensor array, each proximity sensor can feed back, in a timely and rapid manner, the movement in any direction of an object or finger within its sensing range, which improves the recognition efficiency of operation actions. On this basis, a variety of operation actions can be extended, so that users can define different operation actions according to their preferences or needs; the operation is no longer confined to a single simple action but can be any one or more complex actions, improving the user experience.
In an embodiment of the present invention, based on the above arrangement of the proximity sensor array, the in-air operation of the embodiment of the present invention can be recognized. Before that, the return data of the one or more proximity sensors provided in the terminal needs to be detected. The return data is the change data generated by each proximity sensor according to the positional change of a finger, or of a stylus provided to work with the terminal, as it approaches the terminal, and specifically as it approaches the proximity sensors in the terminal.
S202: parsing, from the return data, the time at which each proximity sensor returned data and the magnitude of the returned data.
In an embodiment of the present invention, when the return data sent in real time by the proximity sensor array is detected, the terminal can parse the return data in real time to extract the operation data information it contains. Specifically, the operation data information includes: the time at which each proximity sensor returned data and the magnitude of the returned data.
In an embodiment of the present invention, since the execution trajectory of each operation differs, the received return data also differs. Moreover, if the distribution array of the proximity sensors differs, the feedback of each proximity sensor to the in-air operation also differs, producing different return data. Therefore, before parsing the execution trajectory of an in-air operation, the distribution array of the proximity sensors needs to be determined first. This is illustrated below taking two distribution arrays as examples.
In one embodiment, the description is given taking as an example 2 proximity sensors in each row and each column arranged inside the rear shell of the terminal, i.e. the four proximity sensors A, B, C, D shown in Fig. 6.
As shown in Fig. 8(a), when two fingers of the user simultaneously make a clockwise twisting motion above the projections of proximity sensors A, B and C, D respectively, the return data values S_A, S_D of sensors A and D gradually weaken together with the movement of the user's fingers, while the return data values S_B, S_C of sensors B and C gradually strengthen; furthermore, sensors A and D change first at the same time, with the change time of sensor A preceding that of sensor B, and the change time of sensor D preceding that of sensor C.
As shown in Fig. 8(b), when two fingers of the user simultaneously make a counterclockwise twisting motion above the projections of proximity sensors A, B and C, D respectively, the return data values S_A, S_D of sensors A and D gradually strengthen together with the movement of the user's fingers, while the return data values S_B, S_C of sensors B and C gradually weaken; furthermore, sensors B and C change first at the same time, with the change time of sensor B preceding that of sensor A, and the change time of sensor C preceding that of sensor D.
As shown in Fig. 8(c), when one or more fingers of the user make an upward linear sliding motion above the projections of proximity sensors A, B and C, D, the return data values S_C, S_D of sensors C and D gradually weaken together with the movement of the user's fingers, while the return data values S_A, S_B of sensors A and B gradually strengthen; furthermore, sensors C and D change first at the same time, and sensors A and B begin to change later than sensors C and D.
As shown in Fig. 8(d), when one or more fingers of the user make a downward linear sliding motion above the projections of proximity sensors A, B and C, D, the return data values S_C, S_D of sensors C and D gradually strengthen together with the movement of the user's fingers, while the return data values S_A, S_B of sensors A and B gradually weaken; furthermore, sensors A and B change first at the same time, and sensors C and D begin to change later than sensors A and B.
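The four cases of Fig. 8 reduce to a small decision table: in each case a particular pair of sensors changes first and weakens, and that pair identifies the gesture. The rule set below is a hedged reading of the figure for the A/B top-row, C/D bottom-row layout of Fig. 6, not the claimed detection logic; the function and gesture names are invented for the illustration.

```python
def classify_grid_gesture(first_pair, first_pair_trend):
    """Map (which sensors responded first, how their values changed)
    to one of the four Fig. 8 gestures. Interpretation is illustrative."""
    table = {
        # Fig. 8(a): A and D change first, weakening -> clockwise twist
        (frozenset("AD"), "weaken"): "rotate_clockwise",
        # Fig. 8(b): B and C change first, weakening -> counterclockwise twist
        (frozenset("BC"), "weaken"): "rotate_counterclockwise",
        # Fig. 8(c): C and D change first, weakening -> upward swipe
        (frozenset("CD"), "weaken"): "swipe_up",
        # Fig. 8(d): A and B change first, weakening -> downward swipe
        (frozenset("AB"), "weaken"): "swipe_down",
    }
    return table.get((frozenset(first_pair), first_pair_trend), "unknown")
```

Any other first-responding pair, or a pair whose values do not follow the tabulated pattern, falls through to "unknown" and would be ignored, consistent with the mismatch handling in step S102.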
In another embodiment, the description is given taking as an example the trapezoidal array of proximity sensors shown in Fig. 7 arranged inside the rear shell of the terminal, i.e. the five proximity sensors A, B, C, D, E shown in Fig. 7.
As shown in Fig. 9(a), when one or more fingers simultaneously make a clockwise circling motion above the projections of sensors A, B, C of the trapezoidal array, the return values S_A, S_B, S_C of sensors A, B, C in turn strengthen and then weaken with the movement of the user's fingers, cycling as the fingers keep moving, while the return data values S_D, S_E of sensors D and E remain weak throughout.
As shown in Fig. 9(b), when one or more fingers simultaneously make a clockwise circling motion above the projections of sensors B, C, D of the trapezoidal array, the return values S_B, S_D, S_C of sensors B, D, C in turn strengthen and then weaken with the movement of the user's fingers, cycling as the fingers keep moving, while the return data values S_A, S_E of sensors A and E remain weak throughout.
As shown in Fig. 9(c), when one or more fingers simultaneously make a clockwise circling motion above the projections of sensors C, D, E of the trapezoidal array, the return values S_C, S_D, S_E of sensors C, D, E in turn strengthen and then weaken with the movement of the user's fingers, cycling as the fingers keep moving, while the return data values S_A, S_B of sensors A and B remain weak throughout.
As shown in Fig. 9(d), when one or more fingers simultaneously make a counterclockwise circling motion above the projections of sensors A, B, C of the trapezoidal array, the return values S_C, S_B, S_A of sensors C, B, A in turn strengthen and then weaken with the movement of the user's fingers, cycling as the fingers keep moving, while the return data values S_D, S_E of sensors D and E remain weak throughout.
S203: determining, according to the time at which each proximity sensor returned data and the magnitude of the returned data, the response order of the proximity sensors and the change in distance between the hand and each proximity sensor during execution of the in-air gesture.
In an embodiment of the present invention, after the time at which each proximity sensor returned data and the magnitude of the returned data have been obtained from the return data as described for step S202, the response order of the proximity sensors and the change in distance between the hand and each proximity sensor during execution of the in-air gesture can be determined.
S204: determining the in-air gesture according to the response order of the proximity sensors and the change in distance between the hand and each proximity sensor during execution of the in-air gesture.
In an embodiment of the present invention, once the response order of the proximity sensors and the change in distance between the hand and each proximity sensor during execution of the in-air gesture have been determined, the specific in-air gesture can be determined.
S102: when it is determined that the detected in-air gesture matches the preset first trigger operation, displaying, at the preset position, the label of the first video watched last time; wherein the first video is the video that was not finished last time.
In an embodiment of the present invention, after the in-air gesture performed by the user is detected through the scheme in step S101, the in-air gesture can be compared with the prestored first trigger operation. When the detected in-air gesture matches the prestored first trigger operation, it is determined that the in-air gesture is the trigger operation for opening the label of the first video watched last time; when the detected in-air gesture does not match the prestored first trigger operation, the currently detected in-air gesture can be ignored. Here, "matches" means that the two are identical or that their similarity is greater than or equal to a preset similarity threshold.
Optionally, determining that the detected in-air gesture matches the preset first trigger operation includes:
comparing the detected in-air gesture with the preset first trigger operation; and
determining that the detected in-air gesture matches the preset first trigger operation when the detected in-air gesture is identical to the preset first trigger operation or their similarity is greater than or equal to a preset similarity threshold.
In an embodiment of the present invention, the similarity threshold can be defined freely for different application scenarios and is not specifically limited here.
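A minimal sketch of this matching rule follows, assuming a gesture is represented as a sequence of sensor labels in response order and that similarity is measured with a sequence-ratio metric such as `difflib.SequenceMatcher`. Both the representation and the metric are assumptions — the text deliberately leaves them open.

```python
from difflib import SequenceMatcher

def matches(detected, preset, threshold=0.8):
    """Return True when the detected in-air gesture equals the preset
    first trigger operation, or their similarity meets the threshold."""
    if detected == preset:
        return True
    return SequenceMatcher(None, detected, preset).ratio() >= threshold
```

Raising or lowering `threshold` trades false rejections against false triggers, which is presumably why the text leaves the value to each application scenario.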
In an embodiment of the present invention, after it is determined through the above scheme that the detected in-air gesture matches the preset first trigger operation, the label of the first video watched last time can be displayed at the preset position, as shown in Fig. 10; the first video is the video that was not finished last time. The specific display manner may be popping up, or expanding in a preset order (for example from left to right or from right to left). Optionally, the label of the first video includes: a thumbnail window of the first video or a link to the first video.
In other embodiments, the display manner may also differ according to the form of the label. For example, the label may also be a warning mark, an image, or an animation. When the label is, for instance, an animation of a jumping monkey, it can be displayed in the manner of the monkey jumping.
S103: when a second trigger operation directed at the label is detected, starting to play the first video; wherein the starting time point of playback is the time point at which playback was last stopped.
In an embodiment of the present invention, after the label of the video that was not finished last time is displayed on the video application interface, the user can perform a second trigger operation on the label, the second trigger operation being an operation instructing the terminal to play the first video.
In an embodiment of the present invention, the second trigger operation may include one or more of: pressing a programmable button, a touch operation on the touch screen, a voice command operation, an in-air gesture operation, a grip operation, password input, identity verification, and so on, where identity verification may include face recognition, iris recognition, voice recognition, fingerprint recognition, tattoo recognition, etc. The specific trigger manner is not limited here; any operation capable of triggering the terminal to play the first video falls within the protection scope of the embodiments of the present invention. Considering that the user does not want overly complicated operations when playing a video, the second trigger operation can be set fairly simple: for example, a button may be set on the terminal whose pressing triggers the terminal to play the first video; or the terminal may be triggered to play the first video by a voice command, which may take a relatively simple form such as a cough or a spoken count; or the terminal may be triggered to play the first video by another simple in-air gesture.
In an embodiment of the present invention, given the various forms of the second trigger operation described above, the detection manners for the second trigger operation are likewise varied. For example, a touch operation can be detected by a preset fingerprint recognition device; face recognition in identity verification can be performed by a preset face recognition device and/or a corresponding face recognition algorithm; a grip operation can be detected by a preset pressure sensor combined with a fingerprint recognition device, the pressure sensor detecting the user's grip force and the fingerprint recognition device detecting the positions of the fingers and the palm, thereby determining the user's grip range. Different detection methods and devices can therefore be used for different operation forms. In the embodiments of the present invention, the second trigger operation is not specifically limited, nor are the specific detection methods and devices for it; the specific detection devices and methods can be defined according to the corresponding operation type.
Optionally, starting to play the first video includes:
enlarging the thumbnail window and then playing the first video; or,
opening the first video via the preset link and playing it.
In an embodiment of the present invention, after the second trigger operation for playing the first video is detected through the above scheme, the first video can be played directly, as shown in Fig. 11. The specific playing form can be defined according to the specific form of the label. For example, when the label is a thumbnail window of the first video, the thumbnail window can be enlarged directly and the first video played; when the label is a link to the first video, the first video can be opened via the preset link and played. In other embodiments, as in the earlier example, when the label is an animation of a jumping monkey, playback can begin by having the monkey pull out a video window hidden at the side of the video application interface to play the first video. In still other embodiments, the first video can also be played in other manners according to other forms of the label, which are not illustrated one by one here.
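The dispatch on label form described above can be sketched as follows. The label classes and the resume-point bookkeeping are invented purely for illustration; the text does not prescribe how the last-stop time point is stored.

```python
from dataclasses import dataclass

@dataclass
class Thumbnail:
    video_id: str

@dataclass
class Link:
    url: str

def start_playback(label, resume_points):
    """Resume the first video from the point where playback last stopped,
    choosing the opening step according to the form of the label."""
    if isinstance(label, Thumbnail):
        action = f"enlarge thumbnail of {label.video_id}"
        key = label.video_id
    elif isinstance(label, Link):
        action = f"open link {label.url}"
        key = label.url
    else:
        raise ValueError("unsupported label form")
    # playback starts at the time point at which it was last stopped
    return action, resume_points.get(key, 0.0)
```

Other label forms (such as the animation example) would simply add further branches; the resume point is looked up the same way regardless of how the video is opened.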
The foregoing has introduced all the essential features of the embodiments of the present invention. It should be noted that the above are one or more implementations of the embodiments of the present invention and not the full content of the present invention; in other embodiments, other implementations may be adopted. Any implementation similar or identical to the embodiments of the present invention, and any combination of the essential features of the embodiments of the present invention, falls within the protection scope of the embodiments of the present invention.
The present invention proposes a terminal, the terminal including: a detection module, a display module, and a playing module. The detection module detects, in a video application interface, an in-air gesture in front of a preset position on the video application interface; when the detection module determines that the detected in-air gesture matches a preset first trigger operation, the display module displays, at the preset position, the label of the first video watched last time, wherein the first video is the video that was not finished last time; when the detection module detects a second trigger operation directed at the label, the playing module starts to play the first video, wherein the starting time point of playback is the time point at which playback was last stopped. Through the scheme of the embodiments of the present invention, when a user wants to continue watching a video left unfinished last time, user operations are reduced and the user experience is improved.
It should be noted that, herein, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The serial numbers of the embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structural or equivalent process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (10)

1. A terminal, characterized in that the terminal comprises: a detection module, a display module, and a playing module;
the detection module is configured to detect, in a video application interface, an in-air gesture in front of a preset position on the video application interface;
the display module is configured to display, at the preset position, a label of a first video watched last time when the detection module determines that the detected in-air gesture matches a preset first trigger operation; wherein the first video is the video that was not finished last time;
the playing module is configured to start playing the first video when the detection module detects a second trigger operation directed at the label; wherein the starting time point of playback is the time point at which playback was last stopped.
2. The terminal according to claim 1, characterized in that the detection module detecting the air gesture in front of the preset position on the video application interface comprises:
detecting return data of each proximity sensor in a preset proximity sensor array;
parsing, from the return data, the time at which each proximity sensor returned data and the magnitude of the returned data;
determining, according to the time at which each proximity sensor returned data and the magnitude of the returned data, a response order of the proximity sensors and the change in distance between the hand and each proximity sensor while the air gesture is performed; and
determining the air gesture according to the response order of the proximity sensors and the change in distance between the hand and each proximity sensor while the air gesture is performed.
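The detection flow of claim 2 can be illustrated in code. The sketch below is a minimal illustration, not the patented implementation: the linear sensor layout, the `(x_position, response_time, distance_series)` representation, and all function names are assumptions; the claim only requires deriving the gesture from the sensors' response order and the hand-to-sensor distance changes.

```python
# Illustrative sketch of claim 2 (assumed sensor layout and data format).
# Each proximity sensor that returned data is summarized as a tuple:
#   (x_position, response_time, distance_series)
# where distance_series is the hand-to-sensor distance over time.

def infer_air_gesture(responses):
    """Derive an air gesture from the sensors' response order and the
    change in hand-to-sensor distance (the two signals named in claim 2)."""
    if not responses:
        return "none"
    # Response order: sort sensors by the time they first returned data.
    ordered = sorted(responses, key=lambda r: r[1])
    first_x, last_x = ordered[0][0], ordered[-1][0]
    # A spatial left-to-right (or right-to-left) response order implies a swipe.
    if last_x > first_x:
        return "swipe_right"
    if last_x < first_x:
        return "swipe_left"
    # Only one sensor position responded: fall back to the distance change
    # to distinguish a push toward the screen from a steady hover.
    distances = ordered[0][2]
    if len(distances) >= 2 and distances[-1] < distances[0]:
        return "push_toward_screen"
    return "hover"
```

For example, sensors at x-positions 0, 1, 2 responding at successively later times would be classified as a rightward swipe.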
3. The terminal according to claim 1, characterized in that the detection module determining that the detected air gesture matches the preset first trigger action comprises:
comparing the detected air gesture with the preset first trigger action; and
when the detected air gesture is identical to the preset first trigger action, or their similarity is greater than or equal to a preset similarity threshold, determining that the detected air gesture matches the preset first trigger action.
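Claims 3 and 8 require only an exact match or a similarity at or above a preset threshold; they do not specify the similarity measure. The sketch below assumes, purely for illustration, that a gesture is summarized as a numeric feature vector and compared by cosine similarity; the function name and threshold value are likewise assumptions.

```python
import math

# Illustrative sketch of claim 3. The feature-vector encoding and the
# cosine similarity measure are assumptions; the claim only requires
# "identical, or similarity >= a preset similarity threshold".

def gesture_matches(detected, preset, threshold=0.9):
    """Return True if the detected gesture matches the preset first
    trigger action under the rule of claim 3."""
    if detected == preset:  # identical gestures match immediately
        return True
    dot = sum(a * b for a, b in zip(detected, preset))
    norm = (math.sqrt(sum(a * a for a in detected))
            * math.sqrt(sum(b * b for b in preset)))
    if norm == 0:
        return False
    return dot / norm >= threshold  # similarity >= preset threshold
```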
4. The terminal according to claim 1, characterized in that the second trigger action comprises one or more of the following: a click operation, a slide operation, an air gesture, and a voice command; and
the label of the first video comprises: a thumbnail window of the first video, or a link to the first video.
5. The terminal according to claim 4, characterized in that the playing module starting to play the first video comprises:
enlarging the thumbnail window and then playing the first video; or
opening the first video through a preset link and playing it.
6. A video playing method, characterized in that the method comprises:
detecting, in a video application interface, an air gesture in front of a preset position on the video application interface;
when it is determined that the detected air gesture matches a preset first trigger action, displaying, at the preset position, a label of a first video watched last time, wherein the first video is a video that was not played to the end last time; and
when a second trigger action directed at the label is detected, starting to play the first video, wherein a start time point of the playing is the time point at which playing was last stopped.
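The resume behavior of claims 1 and 6 (show the last unfinished video and restart it from the last stop point) implies that the terminal records, at each stop event, which video was playing and where it stopped. A minimal sketch of such a record, with all class and method names assumed rather than taken from the patent:

```python
# Illustrative sketch of the playback record behind claims 1 and 6.
# Class, method, and field names are assumptions, not from the patent.

class PlaybackHistory:
    def __init__(self):
        # (video_id, stop_position_seconds, played_to_end)
        self._last = None

    def on_stop(self, video_id, position, duration):
        """Record where playback stopped and whether the video finished."""
        self._last = (video_id, position, position >= duration)

    def last_unfinished(self):
        """Return (video_id, resume_position) for the label of claim 1,
        or None if the last video was watched to the end."""
        if self._last is None or self._last[2]:
            return None
        return self._last[0], self._last[1]
```

In this sketch, a matching air gesture would build the label from `last_unfinished()`, and the second trigger action would start playback at the returned resume position.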
7. The video playing method according to claim 6, characterized in that the detecting the air gesture in front of the preset position on the video application interface comprises:
detecting return data of each proximity sensor in a preset proximity sensor array;
parsing, from the return data, the time at which each proximity sensor returned data and the magnitude of the returned data;
determining, according to the time at which each proximity sensor returned data and the magnitude of the returned data, a response order of the proximity sensors and the change in distance between the hand and each proximity sensor while the air gesture is performed; and
determining the air gesture according to the response order of the proximity sensors and the change in distance between the hand and each proximity sensor while the air gesture is performed.
8. The video playing method according to claim 6, characterized in that the determining that the detected air gesture matches the preset first trigger action comprises:
comparing the detected air gesture with the preset first trigger action; and
when the detected air gesture is identical to the preset first trigger action, or their similarity is greater than or equal to a preset similarity threshold, determining that the detected air gesture matches the preset first trigger action.
9. The video playing method according to claim 6, characterized in that the second trigger action comprises one or more of the following: a click operation, a slide operation, an air gesture, and a voice command; and
the label of the first video comprises: a thumbnail window of the first video, or a link to the first video.
10. The video playing method according to claim 9, characterized in that the starting to play the first video comprises:
enlarging the thumbnail window and then playing the first video; or
opening the first video through a preset link and playing it.
CN201710004991.4A 2017-01-04 2017-01-04 Terminal and video playing method Expired - Fee Related CN106792224B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710004991.4A CN106792224B (en) 2017-01-04 2017-01-04 Terminal and video playing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710004991.4A CN106792224B (en) 2017-01-04 2017-01-04 Terminal and video playing method

Publications (2)

Publication Number Publication Date
CN106792224A true CN106792224A (en) 2017-05-31
CN106792224B CN106792224B (en) 2020-06-09

Family

ID=58949945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710004991.4A Expired - Fee Related CN106792224B (en) 2017-01-04 2017-01-04 Terminal and video playing method

Country Status (1)

Country Link
CN (1) CN106792224B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108279846A * 2018-01-31 2018-07-13 北京硬壳科技有限公司 Method and device for gesture model establishment and gesture authentication
WO2019041539A1 (en) * 2017-09-01 2019-03-07 深圳市沃特沃德股份有限公司 Video playing method and device, and vehicle-mounted system

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103186234A * 2011-12-31 2013-07-03 联想(北京)有限公司 Control method and electronic device
KR20140078171A * 2012-12-17 2014-06-25 (주)유티엘코리아 Method for playing augmented reality content and system for executing the method
US20140199947A1 (en) * 2013-01-11 2014-07-17 Lg Electronics Inc. Mobile terminal and controlling method thereof
CN104166460A (en) * 2013-05-16 2014-11-26 联想(北京)有限公司 Electronic device and information processing method
US20140354536A1 (en) * 2013-05-31 2014-12-04 Lg Electronics Inc. Electronic device and control method thereof
CN104469511A (en) * 2013-09-17 2015-03-25 联想(北京)有限公司 Information processing method and electronic device
CN104581409A (en) * 2015-01-22 2015-04-29 广东小天才科技有限公司 Virtual interactive video playing method and device
CN104935739A (en) * 2015-05-29 2015-09-23 努比亚技术有限公司 Audio and video application control method and device
CN105245951A (en) * 2015-09-29 2016-01-13 努比亚技术有限公司 Audio/video file playing device and method
CN105357585A (en) * 2015-08-29 2016-02-24 华为技术有限公司 Method and device for playing video content at any position and time
CN105357381A (en) * 2015-10-28 2016-02-24 努比亚技术有限公司 Terminal operation method and intelligent terminal
CN105389003A (en) * 2015-10-15 2016-03-09 广东欧珀移动通信有限公司 Control method and apparatus for application in mobile terminal
CN105872813A (en) * 2015-12-10 2016-08-17 乐视网信息技术(北京)股份有限公司 Hotspot video displaying method and device
CN106210836A * 2016-07-28 2016-12-07 广东小天才科技有限公司 Interactive learning method and device during video playing, and terminal device
CN106254636A * 2016-07-27 2016-12-21 努比亚技术有限公司 Control method and mobile terminal


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Sun Jing (孙靖): "Vision-based real-time gesture recognition and its application in presentation control", Computing Technology and Automation (《计算技术与自动化》) *
Qixun Zhongwen (旗讯中文): "Easily Master Great Internet Applications in 24 Hours" (《Internet精彩应用24小时轻松掌握》), 31 January 2008, China Railway Publishing House *
Chen Yi (陈义): "Application of vision-based gesture recognition technology in vehicle-mounted head units", Electronic Design Engineering (《电子设计工程》) *
Chen Maodan (陈茂丹): "Analysis and research on gesture interaction design for smart touchscreen phones", China Master's Theses Full-text Database (Information Science and Technology) *


Also Published As

Publication number Publication date
CN106792224B (en) 2020-06-09

Similar Documents

Publication Publication Date Title
CN104750420B (en) Screenshotss method and device
CN106888158A (en) A kind of instant communicating method and device
CN104808945B (en) The display packing and device of virtual key
CN105357367B (en) Recognition by pressing keys device and method based on pressure sensor
CN104991772B (en) Remote operation bootstrap technique and device
CN106911850A (en) Mobile terminal and its screenshotss method
CN106843724A (en) A kind of mobile terminal screen anti-error-touch device and method, mobile terminal
CN106843723A (en) A kind of application program associates application method and mobile terminal
CN105094543B (en) terminal operation instruction input method and device
CN106293069A (en) The automatic share system of content and method
CN106803058A (en) A kind of terminal and fingerprint identification method
CN106570945A (en) Terminal, check-in machine and check-in method
CN104731508B (en) Audio frequency playing method and device
CN106791155A (en) A kind of volume adjustment device, volume adjusting method and mobile terminal
CN106791187A (en) A kind of mobile terminal and NFC method
CN106775336A (en) A kind of content duplication method, device and terminal
CN106657579A (en) Content sharing method, device and terminal
CN106412316A (en) Media resource playing control device and method
CN106371682A (en) Gesture recognition system based on proximity sensor and method thereof
CN106527685A (en) Control method and device for terminal application
CN106648324A (en) Hidden icon operating method, device and terminal
CN106445148A (en) Method and device for triggering terminal application
CN106791149A (en) A kind of method of mobile terminal and control screen
CN106790941A (en) A kind of way of recording and device
CN106792224A (en) A kind of terminal and video broadcasting method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200514

Address after: No. 56, Shiyang Road, Qinhuai District, Nanjing City, Jiangsu Province

Applicant after: NANJING BAIXIA HIGH-TECHNOLOGY INDUSTRY PARK INVESTMENT DEVELOPMENT Co.,Ltd.

Address before: 518000 Guangdong Province, Shenzhen high tech Zone of Nanshan District City, No. 9018 North Central Avenue's innovation building A, 6-8 layer, 10-11 layer, B layer, C District 6-10 District 6 floor

Applicant before: NUBIA TECHNOLOGY Co.,Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200609

Termination date: 20210104