CN105430483A - Method and system for mutual control of intelligent terminals - Google Patents


Info

Publication number
CN105430483A
Authority
CN
China
Prior art keywords
terminal
controlled
control
voice data
controlled terminal
Prior art date
Legal status
Granted
Application number
CN201510745400.XA
Other languages
Chinese (zh)
Other versions
CN105430483B
Inventor
谢文君
罗婷
Current Assignee
Vtron Technologies Ltd
Original Assignee
Vtron Technologies Ltd
Priority date
Filing date
Publication date
Application filed by Vtron Technologies Ltd
Priority to CN201510745400.XA
Publication of CN105430483A
Application granted; publication of CN105430483B
Legal status: Active

Classifications

    All under H04N21/00 (Selective content distribution, e.g. interactive television or video on demand [VOD]) → H04N21/40 (Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]):
    • H04N21/43615 Interfacing a home network, e.g. for connecting the client to a plurality of peripherals
    • H04N21/4113 Peripherals receiving signals from specially adapted client devices; PC
    • H04N21/4222 Remote control device emulator integrated into a non-television apparatus, e.g. a PDA, media center or smart toy
    • H04N21/43637 Adapting the video stream to a specific local network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
    • H04N21/439 Processing of audio elementary streams

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention relates to a method for mutual control of intelligent terminals. The method comprises the steps of: establishing a connection between a control terminal and a controlled terminal; sending the audio data and video data of the controlled terminal to the control terminal through the connection; presenting the audio data and video data received by the control terminal to a user, and receiving at the control terminal a first control instruction that the user sends to the controlled terminal according to the audio data and video data; sending the first control instruction from the control terminal to the controlled terminal, and converting, by the controlled terminal, the first control instruction into a second control instruction recognizable by the controlled terminal, so as to control the audio data and video data of the controlled terminal; and sending the control result from the controlled terminal to the control terminal, and presenting the control result to the user by the control terminal. Mutual control between terminals running two different operating systems is thereby realized.

Description

Method and system for mutual control of intelligent terminals
Technical field
The present invention relates to the field of communication technology, and in particular to a method and system for mutual control of intelligent terminals.
Background art
With the arrival of the information age, more and more intelligent terminals have spread into people's lives. Intelligent terminals include computers, smartphones, notebook computers, tablet computers, and so on. An intelligent terminal has an independent operating system; users can install third-party software, games, and other programs on their own, continuously extending the terminal's functions, and can access wireless networks through a mobile communication network. As many intelligent terminals (such as smartphones and tablets) have become miniaturized and mobile, users can carry them along and use them anytime and anywhere, which greatly facilitates their use.
However, current intelligent terminals still fail to bring the greatest possible convenience to people's work and life, and mutual control between intelligent terminals remains difficult. For example, because smartphones and ordinary PCs (Personal Computers) adopt different system architectures, it is difficult to realize mutual control between a smartphone and an ordinary PC. Most current smartphones run Android or iOS, and these systems do not yet interface well with the mainstream Windows system, so it is difficult to realize control operations between the different systems through the systems' own interfaces.
Summary of the invention
In view of this, it is necessary to provide a method and system for mutual control of intelligent terminals, to address the problem that mutual control between intelligent terminals is difficult to realize.
A method for mutual control of intelligent terminals comprises the following steps:
establishing a connection between a control terminal and a controlled terminal, wherein the control terminal and the controlled terminal are terminals with different operating systems;
sending the audio data and video data of the controlled terminal to the control terminal through the connection;
presenting the audio data and video data received by the control terminal to a user, and receiving at the control terminal a first control instruction that the user sends to the controlled terminal according to the audio data and video data;
sending the first control instruction from the control terminal to the controlled terminal, and converting, by the controlled terminal, the first control instruction into a second control instruction recognizable by the controlled terminal, so as to control the audio data and video data of the controlled terminal;
sending the control result from the controlled terminal to the control terminal, and presenting the control result to the user by the control terminal.
A system for mutual control of intelligent terminals comprises:
an establishing module for establishing a connection between a control terminal and a controlled terminal, wherein the control terminal and the controlled terminal are terminals with different operating systems;
a first sending module for sending the audio data and video data of the controlled terminal to the control terminal through the connection;
a receiving module for presenting the audio data and video data received by the control terminal to a user, and for receiving at the control terminal a first control instruction that the user sends to the controlled terminal according to the audio data and video data;
a second sending module for sending the first control instruction from the control terminal to the controlled terminal, and for converting, by the controlled terminal, the first control instruction into a second control instruction recognizable by the controlled terminal, so as to control the audio data and video data of the controlled terminal;
a third sending module for sending the control result from the controlled terminal to the control terminal and presenting the control result to the user by the control terminal.
In the above method and system for mutual control of intelligent terminals, a connection is established between the control terminal and the controlled terminal; the audio data and video data of the controlled terminal are sent to the control terminal; the control terminal presents the received audio data and video data to the user and receives the first control instruction that the user sends to the controlled terminal according to that data; the first control instruction is sent from the control terminal to the controlled terminal and converted by the controlled terminal into a second control instruction it can recognize, so as to control its audio data and video data; and the control result is sent from the controlled terminal to the control terminal, which presents it to the user. Mutual control between terminals running two different operating systems is thereby realized.
Brief description of the drawings
Fig. 1 is a flowchart of the method for mutual control of intelligent terminals according to an embodiment;
Fig. 2 is a structural schematic diagram of the system for mutual control of intelligent terminals according to an embodiment;
Fig. 3 is a structural schematic diagram of the desktop video sending module according to an embodiment;
Fig. 4 is a structural schematic diagram of the desktop video display module according to an embodiment;
Fig. 5 is the system framework of UniversalMedia according to an embodiment.
Detailed description
Embodiments of the method for mutual control of intelligent terminals of the present invention are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of the method for mutual control of intelligent terminals according to an embodiment. As shown in Fig. 1, the method of the present invention may comprise the following steps:
S1: establishing a connection between the control terminal and the controlled terminal, wherein the control terminal and the controlled terminal are terminals with different operating systems;
Here the control terminal may be a computer and the controlled terminal an intelligent terminal such as a mobile phone or tablet computer; alternatively, the control terminal may be an intelligent terminal such as a mobile phone or tablet and the controlled terminal a computer. The control terminal and the controlled terminal may be connected through a wireless network.
S2: sending the audio data and video data of the controlled terminal to the control terminal through the connection;
In this step, the audio data and video data of the controlled terminal can be sent to the control terminal. The audio data may be the audio produced by the windows in one or several shared regions of the controlled terminal, or all of the audio produced by the whole controlled terminal. The video data is the image data currently displayed by the controlled terminal. A shared region is a region of the controlled terminal that the user wants the control terminal to control; whether a certain region or the whole operating system is shared can be determined according to the user's needs, and the division of shared regions can be determined according to the sizes of the currently opened application windows. When multiple applications are open on the controlled terminal, generally only the audio of one application is collected, in order to prevent information leakage from the other applications.
When the shared region is a sub-region of the controlled terminal, the audio data and video data of the controlled terminal can be sent to the control terminal as follows:
S21: at the controlled terminal, associating the audio data of the controlled terminal and the first window that produces the audio data with the first position parameter of the first window, to obtain a first association result;
S22: at the controlled terminal, associating the video data of the controlled terminal and the second window that produces the video data with the second position parameter of the second window, to obtain a second association result;
S23: sending the first association result and the second association result to the control terminal.
In step S21, the application name of the first window can first be obtained; the application name is then matched against each channel component name in the system volume mixer, and the matching result can be used to characterize which window in which shared region produces audio data. The first position parameter of the first window on the controlled terminal can then be obtained, and finally the matching result can be associated with the first position parameter. There may be one or more first windows. When associating, an audio flag bit can be set for the audio data in each shared region according to the matching result; the audio flag bit characterizes whether audio data exists in the corresponding shared region. If audio data exists in a shared region, the flag bit is set for the audio data of that region; if not, the audio flag bit of that region is empty. The audio flag bit can be added to the first position parameter to form a region audio/position parameter set, which establishes the one-to-one correspondence between the application windows in each shared region and the positions of those windows. Finally, the matching result can be associated with the first position parameter according to the region audio/position parameter set. By establishing the association among the audio data, the first window, and the first position parameter, the user at the control terminal can accurately obtain the audio played by each window and the position of each window of the controlled terminal, which makes it convenient for the user to control the audio data.
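The region audio/position parameter set described above can be sketched as a simple data structure. All names here (`build_region_params`, `audio_flag`, the dictionary fields) are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the region audio/position parameter set: each
# shared region pairs a window's audio with its position on the controlled
# terminal, and an audio flag bit records whether the region currently
# produces sound (empty, i.e. None, when it does not).

def build_region_params(windows):
    """windows: list of dicts with 'app_name', 'position' (x, y, w, h),
    and 'audio' (bytes, or None when the window is silent)."""
    region_params = []
    for win in windows:
        has_audio = win.get("audio") is not None
        region_params.append({
            "app_name": win["app_name"],
            "position": win["position"],      # first position parameter
            "audio_flag": has_audio or None,  # empty when no audio exists
            "audio": win.get("audio"),
        })
    return region_params
```

A control terminal receiving this set could then look up, per window, both what it sounds like and where it sits on the controlled terminal's screen.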
In step S22, the video data currently displayed by the controlled terminal can first be captured; the video data is then encoded and compressed, and the compressed video data is associated with the second position parameter to form a video/position parameter set, which is sent to the control terminal over the network. The second position parameter represents the position on the controlled terminal of the window that is displaying video, and the video/position parameter set establishes the one-to-one correspondence between the video-displaying windows in each shared region and the positions of those windows.
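Step S22 can be sketched as follows. The patent does not name a codec, so `zlib` stands in for the unspecified video compression, and the function and field names are assumptions.

```python
import zlib

# Illustrative sketch of step S22: compress a captured frame and associate
# it with the second position parameter of the window displaying it, forming
# one entry of the video/position parameter set.
def build_video_param_set(frame_bytes, window_position):
    compressed = zlib.compress(frame_bytes)  # stand-in for the video codec
    return {"video": compressed, "position": window_position}
```

The receiving side would decompress `video` and draw it at `position`, preserving the one-to-one window/position correspondence the description requires.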
In one embodiment, the audio data may also be mixed before being sent to the control terminal. The mixing can be performed after the step of matching the application name against each channel component name in the system volume mixer. Mixing combines the sounds of the multiple intercepted applications and prevents the audio of multiple applications from becoming a cacophony that is hard to make out when played simultaneously. During mixing, the frequency, sound quality, and so on of the original audio data can be adjusted separately to optimize each track, and the tracks are finally superimposed.
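The per-track adjustment followed by superposition can be illustrated with a minimal mixing sketch. The 16-bit PCM sample format, the default equal gains, and all names are assumptions added for illustration.

```python
# Minimal mixing sketch: apply a per-track gain (the "separate adjustment"),
# superimpose the tracks sample by sample, and clip to the 16-bit range.
def mix_tracks(tracks, gains=None):
    """tracks: list of equal-length lists of 16-bit PCM samples."""
    if gains is None:
        # Equal gains that keep the sum within range by construction.
        gains = [1.0 / max(len(tracks), 1)] * len(tracks)
    mixed = []
    for samples in zip(*tracks):
        value = sum(int(s * g) for s, g in zip(samples, gains))
        mixed.append(max(-32768, min(32767, value)))  # clip to int16
    return mixed
```

Real mixing would also resample and filter each track, but the superposition step is the core of what the embodiment describes.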
In one embodiment, audio compression may also be applied to the mixed audio data, and the compressed audio data is then associated with the first position parameter.
When the shared region is the whole area of the controlled terminal, the audio data of the controlled terminal can be sent to the control terminal as follows:
S24: detecting the audio data of the whole controlled terminal;
S25: associating the audio data of the whole controlled terminal with the default value of a flag bit, where the default value of the flag bit is used to determine whether the shared region is a sub-region or the whole area of the controlled terminal;
S26: sending the association result to the control terminal.
In step S24, the audio data of the whole controlled terminal can be regarded as all of the sound emitted by the controlled terminal that the user can hear, i.e. the sound of all currently opened applications plus the system sounds.
In step S25, when the shared region is the whole area of the controlled terminal, the default value of the flag bit can be generated; this flag bit distinguishes region sharing from whole-operating-system sharing. For example, the flag bit may be named isAllShare: isAllShare=true indicates that the shared region is the whole area of the controlled terminal, and isAllShare=false indicates that the shared region is a sub-region of the controlled terminal.
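The isAllShare flag can be framed as a field in the shared-audio message. Apart from the flag name, which the description gives, the message layout and function names are assumptions.

```python
# Sketch of the whole-desktop share message: the isAllShare flag tells the
# receiver whether the audio covers the full desktop or only sub-regions.
def make_share_message(audio, is_all_share):
    return {"isAllShare": is_all_share, "audio": audio}

def is_full_desktop(message):
    """True when the shared region is the whole area of the controlled terminal."""
    return message["isAllShare"] is True
```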
In one embodiment, after the step of associating the audio data of the whole controlled terminal with the default value of the flag bit, audio compression can be applied to the audio data, and the compressed audio data is then sent to the control terminal.
S3: presenting the audio data and video data received by the control terminal to the user, and receiving at the control terminal the first control instruction that the user sends to the controlled terminal according to the audio data and video data;
In this step, the control terminal can present the received audio data and video data to the user and receive the first control instruction that the user issues for the audio data and video data. For example, when the control terminal is a computer, the first control instruction may be a mouse operation or a keyboard operation, such as a mouse click or keyboard input; when the control terminal is a mobile phone or tablet, the first control instruction may be a user gesture. The first control instruction can be associated with third position information, and the associated information is sent to the controlled terminal. The third position information represents the position at which the first control instruction acts. In this way, the controlled terminal can accurately obtain the acting position of the first control instruction and thus accurately control the corresponding position on the controlled terminal.
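Associating the first control instruction with the third position information might be sketched as below. The coordinate scaling between the control terminal's displayed view and the controlled terminal's native resolution is an added assumption (the patent only says the acting position is carried), as are all names.

```python
# Hypothetical control message: the first control instruction plus the third
# position information. The click point is scaled from the control terminal's
# view (src_size) to the controlled terminal's resolution (dst_size) so the
# instruction acts at the right place -- an assumed, not patent-specified, step.
def make_control_message(command, x, y, src_size, dst_size):
    sw, sh = src_size
    dw, dh = dst_size
    return {
        "command": command,
        "position": {"x": round(x * dw / sw), "y": round(y * dh / sh)},
    }
```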
In one embodiment, before the first control instruction is sent from the control terminal to the controlled terminal, the audio data, video data, and first control instruction can also be synchronized at the controlled terminal. Synchronization can be determined from the timestamps at which the audio and video were intercepted: identical timestamps are considered synchronous.
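The timestamp rule above can be sketched as pairing audio and video chunks whose capture timestamps are identical. The chunk representation is an assumption.

```python
# Sketch of timestamp-based synchronisation: each chunk is (timestamp,
# payload); chunks with identical timestamps are considered synchronous
# and are paired, all others are dropped from the synchronised stream.
def pair_by_timestamp(audio_chunks, video_chunks):
    video_by_ts = {ts: payload for ts, payload in video_chunks}
    return [(ts, audio, video_by_ts[ts])
            for ts, audio in audio_chunks if ts in video_by_ts]
```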
S4: sending the first control instruction from the control terminal to the controlled terminal, and converting, by the controlled terminal, the first control instruction into a second control instruction recognizable by the controlled terminal, so as to control the audio data and video data of the controlled terminal;
In step S4, the controlled terminal can process the received audio data, video data, and first control instruction accordingly. For example, when the control terminal has encoded and compressed the audio or video data, the controlled terminal can perform the corresponding decoding. The controlled terminal can also convert the first control instruction, according to a relevant operation conversion protocol, into a second control instruction recognizable by the controlled terminal, so as to control its audio data and video data.
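The operation conversion protocol itself is not specified in the description; a hedged sketch of such a mapping, with an entirely invented conversion table (e.g. a PC mouse click becoming a touch tap on a phone), might look like this:

```python
# Invented illustration of an "operation conversion protocol": map a first
# control instruction (issued in the control terminal's idiom) to a second
# control instruction the controlled terminal's OS can recognise.
CONVERSION_TABLE = {
    "mouse_click": "touch_tap",
    "mouse_drag": "touch_swipe",
    "key_press": "soft_key_event",
}

def convert_command(first_command):
    kind = CONVERSION_TABLE.get(first_command["command"])
    if kind is None:
        raise ValueError(f"no conversion for {first_command['command']!r}")
    # Keep the acting position; only the instruction type is translated.
    return dict(first_command, command=kind)
```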
S5: sending the control result from the controlled terminal to the control terminal, and presenting the control result to the user by the control terminal.
Embodiments of the system for mutual control of intelligent terminals of the present invention are described below with reference to the accompanying drawings.
Fig. 2 is a structural schematic diagram of the system for mutual control of intelligent terminals according to an embodiment. As shown in Fig. 2, the system of the present invention may comprise:
an establishing module 10 for establishing a connection between the control terminal and the controlled terminal, wherein the control terminal and the controlled terminal are terminals with different operating systems;
Here the control terminal may be a computer and the controlled terminal an intelligent terminal such as a mobile phone or tablet computer, or the control terminal may be such an intelligent terminal and the controlled terminal a computer. The control terminal and the controlled terminal may be connected through a wireless network.
a first sending module 20 for sending the audio data and video data of the controlled terminal to the control terminal through the connection;
The first sending module 20 can send the audio data and video data of the controlled terminal to the control terminal. The audio data may be the audio produced by the windows in one or several shared regions of the controlled terminal, or all of the audio produced by the whole controlled terminal. The video data is the image data currently displayed by the controlled terminal. A shared region is a region of the controlled terminal that the user wants the control terminal to control; whether a certain region or the whole operating system is shared can be determined according to the user's needs, and the division of shared regions can be determined according to the sizes of the currently opened application windows. When multiple applications are open on the controlled terminal, generally only the audio of one application is collected, in order to prevent information leakage from the other applications.
When the shared region is a sub-region of the controlled terminal, the first sending module 20 may comprise:
a first association unit 201 for associating the audio data of the controlled terminal and the first window that produces the audio data with the first position parameter of the first window on the controlled terminal, to obtain a first association result;
a second association unit 202 for associating the video data of the controlled terminal and the second window that produces the video data with the second position parameter of the second window on the controlled terminal, to obtain a second association result;
a first transmitting unit 203 for sending the first association result and the second association result to the control terminal.
The first association unit 201 can first obtain the application name of the first window through an obtaining subunit, and then match the application name against each channel component name in the system volume mixer through a matching subunit; the matching result can be used to characterize which window in which shared region produces audio data. The first position parameter of the first window on the controlled terminal can then be obtained, and finally the matching result is associated with the first position parameter through an association subunit. There may be one or more first windows. The association subunit can set an audio flag bit for the audio data in each shared region according to the matching result; the audio flag bit characterizes whether audio data exists in the corresponding shared region. If audio data exists in a shared region, the flag bit is set for the audio data of that region; if not, the audio flag bit of that region is empty. The audio flag bit can be added to the first position parameter to form a region audio/position parameter set, which establishes the one-to-one correspondence between the application windows in each shared region and the positions of those windows. Finally, the matching result can be associated with the first position parameter according to the region audio/position parameter set. By establishing the association among the audio data, the first window, and the first position parameter, the user at the control terminal can accurately obtain the audio played by each window and the position of each window of the controlled terminal, which makes it convenient for the user to control the audio data.
The second association unit 202 can first capture the video data currently displayed by the controlled terminal, then encode and compress the video data, and associate the compressed video data with the second position parameter to form a video/position parameter set, which is sent to the control terminal over the network. The second position parameter represents the position on the controlled terminal of the window that is displaying video, and the video/position parameter set establishes the one-to-one correspondence between the video-displaying windows in each shared region and the positions of those windows.
In one embodiment, a mixing module may also be included for mixing the audio data. The mixing module can be connected to the matching subunit so that mixing is performed after the application name has been matched against each channel component name in the system volume mixer. Mixing combines the sounds of the multiple intercepted applications and prevents the audio of multiple applications from becoming a cacophony that is hard to make out when played simultaneously. During mixing, the frequency, sound quality, and so on of the original audio data can be adjusted separately to optimize each track, and the tracks are finally superimposed.
In one embodiment, a compression module may also be included for applying audio compression to the mixed audio data and then associating the compressed audio data with the first position parameter.
When the shared region is the whole area of the controlled terminal, the first sending module 20 may comprise:
a detecting unit 204 for detecting the audio data of the whole controlled terminal;
a third association unit 205 for associating the audio data of the whole controlled terminal with the default value of a flag bit, where the default value of the flag bit is used to determine whether the shared region is a sub-region or the whole area of the controlled terminal;
a second transmitting unit 206 for sending the association result to the control terminal.
The audio data of the whole controlled terminal can be regarded as all of the sound emitted by the controlled terminal that the user can hear, i.e. the sound of all currently opened applications plus the system sounds.
When the shared region is the whole area of the controlled terminal, the third association unit 205 can generate the default value of the flag bit; this flag bit distinguishes region sharing from whole-operating-system sharing. For example, the flag bit may be named isAllShare: isAllShare=true indicates that the shared region is the whole area of the controlled terminal, and isAllShare=false indicates that the shared region is a sub-region of the controlled terminal.
Receiver module 30, presents to user for the voice data that received by control terminal and video data, and receives at control terminal the first control command that user sends controlled terminal according to described voice data and video data;
The voice data received and video data can be presented to user by control terminal, and receive by receiver module 30 the first control command that user sends described voice data and video data.Such as, when control terminal is computer, described first control command can be mouse action instruction or keyboard operation instruction, as: click the mouse or the operation such as keyboard data input.When control terminal be mobile phone or panel computer time, described first control command can be the gesture operation of user.Described first control command can be associated with the 3rd positional information, and the primary importance information after association is sent to controlled terminal.Wherein, described 3rd positional information is for representing the position of described first control command effect.In this way, controlled terminal can be made to get the active position of the first control command exactly, thus exactly the relevant position of controlled terminal is controlled.
In one embodiment, before the first control command is sent from the control terminal to the controlled terminal, a synchronization module also synchronizes the audio data, the video data, and the first control command. Synchronization can be determined from the timestamps of the captured audio and video: frames bearing identical timestamps are considered synchronized.
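The timestamp rule above can be sketched as a simple pairing function. The frame representation as `(timestamp, payload)` tuples is an assumption for illustration; the patent only specifies that identical timestamps count as synchronized.

```python
def synchronize(audio_frames, video_frames):
    """Pair audio and video frames bearing identical capture timestamps.

    Frames are (timestamp, payload) tuples; per the rule above,
    identical timestamps are treated as synchronized.
    """
    video_by_ts = dict(video_frames)
    return [(ts, audio, video_by_ts[ts])
            for ts, audio in audio_frames if ts in video_by_ts]
```

Frames with no counterpart at the same timestamp are simply dropped from the synchronized stream in this sketch.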
The second sending module 40 is configured to send the first control command from the control terminal to the controlled terminal, where the controlled terminal converts it into a second control command recognizable by the controlled terminal and uses it to control the audio data and video data of the controlled terminal.
The third sending module 50 is configured to send the control result from the controlled terminal to the control terminal, which presents the control result to the user.
At the controlled terminal, the received audio data, video data, and first control command can be processed accordingly. For example, when the control terminal has compression-encoded the audio or video data, the controlled terminal performs the corresponding decoding. The controlled terminal may also, according to an operation translation protocol, convert the first control command into a second control command it can recognize, use it to control its audio and video data, and send the control result back to the control terminal, which presents it to the user.
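One possible form of the operation translation protocol is a lookup table mapping control-terminal gestures to controlled-terminal mouse events. The gesture and mouse event names below are illustrative assumptions, not defined by the patent.

```python
# Hypothetical gesture-to-mouse translation table.
GESTURE_TO_MOUSE = {
    "tap": "left_click",
    "double_tap": "double_click",
    "long_press": "right_click",
}

def translate_command(first_command):
    """Convert a first control command from the control terminal into a
    second command the controlled terminal recognizes, preserving the
    position on which the command acts."""
    action = GESTURE_TO_MOUSE.get(first_command["gesture"])
    if action is None:
        raise ValueError("unsupported gesture: " + first_command["gesture"])
    return {"action": action, "position": first_command["position"]}
```

Carrying the position through unchanged is what lets the controlled terminal apply the command at the exact location the user touched.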
In one embodiment, for convenience of description, the control terminal is a mobile terminal and the controlled terminal is a PC (Personal Computer), and the technical solution is divided into a PC part and a mobile-terminal part:
The PC end, on the one hand, captures the ordinary PC desktop video and the associated audio data, encodes them, and sends them to the mobile terminal over the network; on the other hand, it receives and decodes the video data from the mobile terminal for display, obtains the current mouse operation information, and sends the corresponding control information to the mobile terminal.
Likewise, the mobile terminal, on the one hand, receives, decodes, and displays the incoming video data, obtains the current user's gesture operations, and sends the corresponding control information to the PC end; on the other hand, it obtains the mobile terminal's own video data and sends it to the PC end.
According to the above functional description of the PC end and the mobile terminal, the software architecture of each can be divided into two parts: a desktop video display module and a desktop video sending module. The processing framework of the desktop video sending module is shown in Figure 3.
Audio detection module: when only a certain region of the system needs to be shared, audio detection obtains one or more application names and matches them against the channel component names in the system volume mixer, finally producing a matching result (which program windows in which shared regions are playing audio) and delivering the matching result and position parameters to the audio capture module and the pairing decision module, respectively. When the whole operating system needs to be shared, the audio detection module outputs the default value carrying the flag bit.
Audio capture module: if the audio detection module outputs a matching result and position parameters, it captures, according to the pairing result, the audio data of the windows that are playing sound and feeds it into the mixing module. If the audio detection module outputs the flag-bit default value, it captures the output volume of the whole system and sends it directly to the audio compression module.
Operation detection module: detects the current user's operations (such as mouse operations and keyboard input) and sends the result to the pairing decision module.
Desktop screen capture module: captures the whole desktop video image and delivers it to the video compression module.
Mixing module: mixes the audio data of any shared region in which more than one window is playing, obtaining the sound played by that region, and delivers it to the audio compression module.
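A minimal sketch of the mixing step, under the assumption that window audio arrives as lists of signed 16-bit PCM samples; the patent does not specify a sample format, so the summing-and-clipping strategy here is illustrative.

```python
def mix(streams):
    """Mix several windows' PCM sample lists into one region stream by
    summing per-sample and clipping to the signed 16-bit range."""
    length = max(len(s) for s in streams)
    mixed = []
    for i in range(length):
        total = sum(s[i] for s in streams if i < len(s))
        mixed.append(max(-32768, min(32767, total)))  # clip to int16
    return mixed
```

Shorter streams simply stop contributing once exhausted, so a region keeps playing as long as any of its windows still has sound.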
Audio compression module: compression-encodes one or more groups of audio data and delivers them to the pairing decision module.
Video compression module: compresses the desktop video image and delivers the compressed desktop video data to the pairing decision module.
Pairing decision module:
According to the audio detection matching result, it determines for each shared region whether audio is playing and adds an audio flag to the region position parameter (set when audio is present, empty otherwise), forming a region audio/position parameter set;
After binding each group of audio data played in a shared region to the corresponding region audio/position parameter set, it sends one or more groups of audio parameter-set data through the network transceiver module, according to which shared region each target user is watching;
According to the parameters provided by operation detection (such as mouse coordinates and keyboard input), it adds an operation identifier and operation parameters to the region position parameter, forming a region operation/position parameter set;
According to which shared region each target user is watching, it binds the encoded desktop video data to one or more positions, forming a region video/position parameter set, which is sent through the network transceiver module.
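The formation of the region audio/position parameter set can be sketched as below. The input layout (`position` tuples and a `has_audio` boolean per region) is an assumption for illustration; the patent specifies only that the flag is set when audio plays and empty otherwise.

```python
def build_region_audio_params(regions):
    """Attach an audio flag to each region's position parameter: set
    when a window in the region is playing sound, empty otherwise."""
    return [
        {
            "region": region_id,
            "position": info["position"],
            "audio_flag": "audio" if info["has_audio"] else "",
        }
        for region_id, info in regions.items()
    ]
```

The resulting parameter set is what gets bound to the audio data of each region before transmission.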
The desktop video display module is mainly responsible for receiving video data transmitted over the network and finally presenting it in the application's image display area.
Figure 4 shows the framework of the desktop video display module:
Network transceiver module: receives the video data transmitted over the network and outputs it to the type parsing module for processing; it is also responsible for sending the data output by the control conversion module.
Type parsing module: parses network packets; video/position parameter-set data is delivered to the video decoding module, audio/position parameter-set data to the audio decoding module, and operation/position parameter-set data to the operation parsing module.
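The three-way dispatch of the type parsing module can be sketched as a routing table. The packet field name `type` and the module identifiers are illustrative assumptions.

```python
def dispatch(packet):
    """Route a parsed network packet to a handler by parameter-set type."""
    routes = {
        "video": "video_decoding_module",
        "audio": "audio_decoding_module",
        "operation": "operation_parsing_module",
    }
    if packet["type"] not in routes:
        raise ValueError("unknown parameter-set type: " + packet["type"])
    return routes[packet["type"]]
```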
Video decoding module: decodes the encoded video data and delivers the resulting video data to the control and audio/video matching module.
Audio decoding module: decodes the encoded audio data and delivers the resulting audio data to the control and audio/video matching module.
Operation parsing module: parses operation commands, converts them into the corresponding local operation instructions, and delivers them to the control and audio/video matching module.
Control and audio/video matching module: according to the data output by the preceding submodules, matches audio, video, and operation instructions simultaneously and sends them to the desktop display module.
Desktop display module: plays the target audio and video corresponding to the playback target selected by the user, and is responsible for obtaining the user's mouse or gesture command parameters (such as clicks and double-clicks).
Control conversion module: converts operation command parameters to form an operation/position parameter set.
The technical solution of the present invention is described below with reference to an application example.
Take the system framework of UniversalMedia, shown in Figure 5, as an example. The UniversalMedia system can be roughly divided into three parts: a desktop display end, a desktop projection end, and a mobile terminal. The desktop display end receives the video information transmitted over the network and displays the desktop video; the desktop projection end collects the local desktop video, audio, and related information and sends it to the desktop display end over the network. In the PC end introduced above, the desktop display end and the desktop projection end can be deployed on different PCs; the mobile-terminal implementation integrates both the desktop display and desktop projection functions: when the mobile terminal needs to watch the desktop video sent by the desktop projection end, the user clicks the "Watch" button to switch, and likewise, when the mobile terminal needs to project its desktop video to the desktop display end, clicking the "Share" button suffices.
The method for mutual control of intelligent terminals of the present invention enables mutual control between terminals running two different operating systems, for example cross-platform interaction between a mobile intelligent terminal and a computer, so that the mobile intelligent terminal can conveniently and efficiently operate and manage the computer and realize real-time viewing and monitoring of a smart home system. It also facilitates information sharing in conferences: free from geographical restrictions, a mobile intelligent terminal can clearly watch a conference speaker's projected content in real time over the network, anytime and anywhere.
The system for mutual control of intelligent terminals of the present invention corresponds one-to-one with the method for mutual control of intelligent terminals of the present invention; the technical features and beneficial effects set forth in the embodiments of the method are all applicable to the embodiments of the system, which is hereby stated.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as such combinations are not contradictory, they shall be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they shall not therefore be construed as limiting the scope of the patent. It should be pointed out that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the present invention, all of which fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be determined by the appended claims.

Claims (10)

1. A method for mutual control of intelligent terminals, characterized by comprising the following steps:
establishing a connection between a control terminal and a controlled terminal, wherein the control terminal and the controlled terminal are terminals with different operating systems;
sending audio data and video data of the controlled terminal to the control terminal through the connection;
presenting the received audio data and video data to a user through the control terminal, and receiving at the control terminal a first control command issued by the user for the controlled terminal according to the audio data and video data;
sending the first control command from the control terminal to the controlled terminal, the controlled terminal converting the first control command into a second control command recognizable by the controlled terminal and controlling the audio data and video data of the controlled terminal; and
sending the control result from the controlled terminal to the control terminal, the control terminal presenting the control result to the user.
2. The method for mutual control of intelligent terminals according to claim 1, characterized in that the step of sending the audio data and video data of the controlled terminal to the control terminal comprises:
determining, according to a user request, the shared region on the controlled terminal that the user needs the control terminal to control; and
sending the audio data and video data within the shared region to the control terminal.
3. The method for mutual control of intelligent terminals according to claim 2, characterized in that, if the shared region is a partial region of the controlled terminal, the step of sending the audio data and video data within the shared region to the control terminal comprises:
associating the audio data of the controlled terminal and the first window generating the audio data with a first position parameter of the first window on the controlled terminal, obtaining a first association result;
associating the video data of the controlled terminal and the second window generating the video data with second position information of the second window on the controlled terminal, obtaining a second association result; and
sending the first association result and the second association result to the control terminal.
4. The method for mutual control of intelligent terminals according to claim 3, characterized in that the step of associating the audio data of the controlled terminal and the first window generating the audio data with the first position parameter of the first window on the controlled terminal comprises:
obtaining the application name of the first window, and matching the application name against each channel component name in the system volume mixer;
obtaining the first position parameter of the first window on the controlled terminal; and
associating the matching result with the first position parameter.
5. The method for mutual control of intelligent terminals according to claim 2, characterized in that, if the shared region is the entire area of the controlled terminal, the step of sending the audio data of the controlled terminal to the control terminal comprises:
detecting the audio data of the whole controlled terminal;
associating the audio data of the whole controlled terminal with a default value carrying a flag bit, wherein the default value carrying the flag bit is used to determine whether the shared region is a partial region or the entire area of the controlled terminal; and
sending the association result to the control terminal.
6. The method for mutual control of intelligent terminals according to claim 4, characterized in that the step of associating the matching result with the first position parameter comprises:
setting, according to the matching result, an audio flag for the audio data in each shared region, wherein the audio flag characterizes whether audio data exists in the respective region;
adding the audio flag to the first position parameter to form a region audio/position parameter set, wherein the region audio/position parameter set establishes a one-to-one correspondence between each program window in a shared region and the position of that application window; and
associating the matching result with the first position parameter according to the region audio/position parameter set.
7. The method for mutual control of intelligent terminals according to claim 1, characterized by further comprising, before controlling the audio data and video data of the controlled terminal, the following step:
synchronizing the audio data, the video data, and the second control command at the control terminal.
8. The method for mutual control of intelligent terminals according to claim 1, characterized in that the step of sending the first control command from the control terminal to the controlled terminal comprises:
associating the first control command with third position information, wherein the third position information indicates the position on which the first control command acts; and
sending the associated first control command to the controlled terminal.
9. A system for mutual control of intelligent terminals, characterized by comprising:
an establishing module, configured to establish a connection between a control terminal and a controlled terminal, wherein the control terminal and the controlled terminal are terminals with different operating systems;
a first sending module, configured to send audio data and video data of the controlled terminal to the control terminal through the connection;
a receiving module, configured to present the received audio data and video data to a user through the control terminal, and to receive at the control terminal a first control command issued by the user for the controlled terminal according to the audio data and video data;
a second sending module, configured to send the first control command from the control terminal to the controlled terminal, the controlled terminal converting the first control command into a second control command recognizable by the controlled terminal and controlling the audio data and video data of the controlled terminal; and
a third sending module, configured to send the control result from the controlled terminal to the control terminal, the control terminal presenting the control result to the user.
10. The system for mutual control of intelligent terminals according to claim 9, characterized in that the first sending module comprises:
a determining unit, configured to determine, according to a user request, the shared region on the controlled terminal that the user needs the control terminal to control; and
a transmitting unit, configured to send the audio data and video data within the shared region to the control terminal.
CN201510745400.XA 2015-11-03 2015-11-03 Method and system for mutual control of intelligent terminals Active CN105430483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510745400.XA CN105430483B (en) 2015-11-03 2015-11-03 Method and system for mutual control of intelligent terminals


Publications (2)

Publication Number Publication Date
CN105430483A true CN105430483A (en) 2016-03-23
CN105430483B CN105430483B (en) 2018-07-10

Family

ID=55508366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510745400.XA Active CN105430483B (en) Method and system for mutual control of intelligent terminals

Country Status (1)

Country Link
CN (1) CN105430483B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107592576A (en) * 2017-09-15 2018-01-16 威创集团股份有限公司 A kind of video sharing method, system, share end and be shared end

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101771707A (en) * 2010-02-08 2010-07-07 中兴通讯股份有限公司 Method for realizing resource share among terminals, resource processing system and terminals
JP2010177848A (en) * 2009-01-28 2010-08-12 Sharp Corp Television device, pc device, and display system comprising television device and pc device
CN102844736A (en) * 2010-03-02 2012-12-26 诺基亚公司 Method and apparatus for providing media mixing based on user interactions
CN103235708A (en) * 2013-04-17 2013-08-07 东莞宇龙通信科技有限公司 Method and device for controlling interaction between display device and terminal
CN103379221A (en) * 2012-04-23 2013-10-30 Lg电子株式会社 Mobile terminal and controling method thereof



Also Published As

Publication number Publication date
CN105430483B (en) 2018-07-10


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant