CN108769261B - Multi-screen interaction system, method and interaction screen equipment - Google Patents
- Publication number
- CN108769261B (application CN201810722040.5A)
- Authority
- CN
- China
- Prior art keywords
- screen
- audio file
- information content
- mobile terminal
- interactive
- Prior art date
- Legal status
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/14—Session management
- H04L67/141—Setup of application sessions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1423—Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/167—Audio streaming, i.e. formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/565—Conversion or adaptation of application format or content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Acoustics & Sound (AREA)
- Telephonic Communication Services (AREA)
- Information Transfer Between Computers (AREA)
Abstract
The invention discloses a multi-screen interaction system, a multi-screen interaction method and an interactive screen device. The interactive screen device detects a user's gesture through a sensor; when the recognition unit identifies the gesture as a multi-screen interaction instruction, a first processing unit selects specific information content according to a predefined selection rule and converts it into an audio file, and the sound playing unit sends the audio file to the mobile terminal so that the mobile terminal converts the audio file back into the corresponding information content. When the gesture matches the multi-screen interaction gesture, the interactive screen can thus obtain the specific information content, convert it into an audio file and send it, so that the user's mobile terminal can share the specific information content; content of interest to the user is shared through the multi-screen interaction server, changing the way outdoor media are presented.
Description
Technical Field
The invention relates to the field of internet communication, and in particular to a multi-screen interaction system, a multi-screen interaction method, and an interactive screen device.
Background
With the continuous spread of large-screen media terminals, outdoor advertising media have grown explosively. Pushing information such as advertisements to an audience watching a large-screen media terminal has become a very common form of information delivery. Pushing advertisements through large-screen terminals is common practice for many media operators: by broadcasting text, pictures, voice, video and other content on a fixed large-screen terminal, a good advertising effect can be achieved.
However, in prior-art application scenarios, a consumer who wants to keep information shown on a large screen or a static poster in a public place traditionally has to write it down, photograph it with a mobile phone, or scan a two-dimensional code on the screen or poster to obtain the information provided by the service party. These ways of acquiring information are cumbersome for consumers and give a poor user experience.
Disclosure of Invention
The invention mainly solves the technical problem of providing a multi-screen interaction system, a multi-screen interaction method and an interactive screen device that allow consumers to share content of interest through a multi-screen interaction server, thereby changing the way outdoor media are presented.
In order to solve the technical problems, the invention adopts a technical scheme that: there is provided a multi-screen interactive system, the system comprising: an interactive screen device for: detecting a gesture of a user, and analyzing the detected gesture to obtain a corresponding gesture control instruction; the interactive screen device defines gesture control instructions in advance; when the gesture control instruction is a multi-screen interaction instruction, selecting specific information content according to a predefined selection rule, and converting the specific information content into an audio file; wherein, the selection rule is set corresponding to the gesture control instruction; transmitting the audio file through a sound playing unit; and the mobile terminal is used for receiving the audio file sent by the interactive screen equipment through the sound receiving unit and converting the audio file into corresponding information content.
Wherein the interactive screen device comprises a first processing unit; the first processing unit includes: the conversion module is used for converting the specific information content into a corresponding sound code; the first Fourier transform module is used for carrying out Fourier forward transform on the acoustic code generated by the conversion module so as to generate a corresponding acoustic wave signal; and the audio generation module is used for processing the sound wave signals generated by the first Fourier transform module to generate an audio file.
Wherein the mobile terminal comprises a second processing unit; the second processing unit includes: the analysis module is used for analyzing the received audio file to obtain a corresponding sound wave signal; the second Fourier transform module is used for performing inverse Fourier transform on the sound wave signal generated by the analysis module so as to obtain a corresponding sound code; and the information processing module is used for converting the sound code generated by the second Fourier transform module into information content.
Wherein, when the mobile terminal obtains the information content, it executes a corresponding operation according to the information content.
Wherein, still include: the multi-screen interaction server establishes communication connection with the mobile terminal and the interaction screen equipment through a network; the interactive screen device is further configured to select a specific information ID according to a predefined selection rule when the gesture control instruction is analyzed to be a multi-screen interactive instruction, convert the specific information ID into an audio file, and send the audio file through the sound playing unit; the mobile terminal is also used for receiving the audio file sent by the interactive screen device, converting the audio file into a corresponding information ID and sending the information ID to the multi-screen interactive server; the multi-screen interaction server is also used for inquiring a database to obtain information content corresponding to the specific information ID; and transmitting the determined information content to the mobile terminal; the database pre-stores the identity ID of the interactive screen equipment which establishes communication connection with the multi-screen interactive server and the information content displayed by the same.
Wherein the interactive screen device comprises a first processing unit comprising: the conversion module is used for converting the specific information ID into a corresponding sound code; the first Fourier transform module is used for carrying out Fourier forward transform on the acoustic code generated by the conversion module so as to generate a corresponding acoustic wave signal; and the audio generation module is used for processing the sound wave signals generated by the first Fourier transform module to generate the audio file.
Wherein the mobile terminal comprises a second processing unit comprising: the analysis module is used for analyzing the received audio file to obtain a corresponding sound wave signal; the second Fourier transform module is used for performing inverse Fourier transform on the sound wave signals generated by the analysis module so as to obtain corresponding sound codes; and the information processing module is used for converting the sound code generated by the second Fourier transform module into an information ID.
In order to solve the technical problems, the invention adopts another technical scheme: provided is a multi-screen interaction method, which comprises the following steps: the interaction screen device detects gestures of a user and analyzes the detected gestures to obtain corresponding gesture control instructions; the interactive screen device defines gesture control instructions in advance; when the gesture control instruction is a multi-screen interaction instruction, the interaction screen device selects specific information content according to a predefined selection rule and converts the specific information content into an audio file, wherein the selection rule is set corresponding to the gesture control instruction; the interactive screen device sends the audio file through the sound playing unit; and the mobile terminal receives the audio file sent by the interactive screen device and converts the audio file into corresponding information content.
When the gesture control instruction is a multi-screen interaction instruction, the interaction screen device selects specific information content according to a predefined selection rule and converts the specific information content into an audio file, which specifically comprises the following steps: when the gesture control instruction is a multi-screen interaction instruction, the interaction screen device selects specific information content according to a predefined selection rule; converts the specific information content into a corresponding sound code; performs a forward Fourier transform on the generated sound code so as to generate a corresponding sound wave signal; and processes the generated sound wave signal to generate an audio file.
The mobile terminal receives the audio file sent by the interactive screen device and converts the audio file into corresponding information content, which specifically comprises the following steps: the mobile terminal receives the audio file sent by the interactive screen device and analyzes the received audio file to obtain a corresponding sound wave signal; performs an inverse Fourier transform on the generated sound wave signal so as to obtain a corresponding sound code; and converts the obtained sound code into information content.
When the gesture control instruction is a multi-screen interaction instruction, the method further comprises: the interaction screen device selects a specific information ID according to a predefined selection rule, converts the specific information ID into an audio file and sends the audio file; the mobile terminal receives the audio file sent by the interactive screen device, converts the audio file into a corresponding information ID, and sends the information ID to the multi-screen interaction server; the multi-screen interaction server queries a database to obtain the information content corresponding to the specific information ID, and sends the determined information content to the mobile terminal; the database pre-stores the identity IDs of the interactive screen devices that have established a communication connection with the multi-screen interaction server and the information content they display.
The interaction screen device selects a specific information ID according to a predefined selection rule, converts the specific information ID into an audio file, and sends the audio file, which specifically comprises: the interaction screen device selects a specific information ID according to a predefined selection rule; converts the specific information ID into a corresponding sound code; performs a forward Fourier transform on the generated sound code so as to generate a corresponding sound wave signal; and processes the sound wave signal generated by the forward Fourier transform to generate the audio file, and sends the audio file.
The mobile terminal receives the audio file sent by the interactive screen device, converts the audio file into a corresponding information ID, and sends the information ID to the multi-screen interaction server, which specifically comprises the following steps: the mobile terminal receives the audio file sent by the interactive screen device and analyzes the received audio file to obtain a corresponding sound wave signal; performs an inverse Fourier transform on the generated sound wave signal to obtain a corresponding sound code; and converts the obtained sound code into an information ID.
In order to solve the technical problems, the invention adopts another technical scheme that: there is provided an interactive screen device, the device comprising: the sensor is used for detecting gestures of a user; the recognition unit is used for recognizing the gesture of the user detected by the sensor so as to analyze and obtain a corresponding gesture control instruction; the interactive screen device defines gesture control instructions in advance; the first processing unit is used for selecting specific information content according to a predefined selection rule when the gesture control instruction analyzed by the identification unit is a multi-screen interaction instruction, and converting the specific information content into an audio file; wherein, the selection rule is set corresponding to the gesture control instruction; and the sound playing unit is used for sending the audio file to the mobile terminal so that the mobile terminal converts the audio file into corresponding information content.
The first processing unit is further configured to select a specific information ID according to a predefined selection rule when the gesture control instruction analyzed by the identification unit is a multi-screen interaction instruction, and convert the specific information ID into an audio file; the sound playing unit is further used for sending the audio file to the mobile terminal, enabling the mobile terminal to convert the audio file into corresponding information ID, and sending the information ID to the multi-screen interaction server to receive information content corresponding to the specific information ID, wherein the information content is obtained by the multi-screen interaction server inquiring a database; the database stores the identity ID of the interactive screen equipment which is in communication connection with the multi-screen interactive server and the corresponding display information content in advance.
According to the multi-screen interaction system, the multi-screen interaction method and the interactive screen device provided by the embodiments of the invention, when a user operates the interactive screen through a gesture that matches the multi-screen interaction gesture, the interactive screen acquires specific information content, converts it into an audio file and sends it, so that the user's mobile terminal can share the specific information content; content of interest to the user is shared through the multi-screen interaction server, thereby changing the way outdoor media are presented.
Drawings
FIG. 1 is a network architecture diagram of a multi-screen interactive system according to a first embodiment of the present invention;
FIG. 2 is a schematic functional structure of a multi-screen interactive system according to a first embodiment of the present invention;
FIG. 3 is a network architecture diagram of a multi-screen interactive system according to a second embodiment of the present invention;
FIG. 4 is a schematic functional structure of a multi-screen interactive system according to a second embodiment of the present invention;
FIG. 5 is a flowchart of a multi-screen interaction method according to a first embodiment of the present invention;
FIG. 6 is a flow chart of the implementation method of step S31 shown in FIG. 5;
FIG. 7 is a flow chart of the implementation method of step S32 shown in FIG. 5;
FIG. 8 is a flow chart of a multi-screen interaction method according to a second embodiment of the present invention;
FIG. 9 is a flow chart of the implementation method of step S41 shown in FIG. 8;
FIG. 10 is a flow chart of the implementation method of step S42 shown in FIG. 8;
FIG. 11 is a flow chart of a multi-screen interaction method according to a third embodiment of the present invention;
FIG. 12 is a flowchart of a multi-screen interaction method according to a fourth embodiment of the present invention.
Detailed Description
The terms from the prior art that are used in the embodiments of the present invention are explained first.
Audio frequency: generally referred to as sound, or frequency of sound.
A receiver: the present invention is directed to an audio receiver, such as a microphone, or a bluetooth receiver that receives BLE bluetooth signals.
Acoustic wave signal: audio that can be received and parsed by the sound receiver, the parsed original signal. Acoustic code: and obtaining meaningful data after carrying out Fourier inverse transformation on the sound wave signals.
Internal control instruction: an instruction signal inside a computer or computer device that triggers a certain function of the device.
External control signaling: external communication signals, established over a common communication protocol, that can be received by other receiving devices, such as WiFi-based control signaling, Bluetooth-based control signaling, iBeacon-based control signaling, or other control signaling transmitted in the form of electromagnetic waves.
Triggered response content: the output produced through intelligent recognition of the input content, such as text, sound, voice, an image, a web page link, a service link, a control signal, an internal control instruction of a device, or external control signaling.
Sound code receiving device: a smart device, a mobile phone APP, or dedicated software, hardware, firmware, or a combination of these; it receives the audio, parses the sound wave signal contained in the audio, finally obtains the sound code, and generates the device's triggered response content.
User portrait, user behavior portrait (persona or behavior persona): a user portrait, also called a user role, is built by getting to know users through user research and behavior analysis, dividing them into different types according to differences in their goals, behaviors and views, extracting the typical characteristics of each type, and adding descriptions such as a name, a photo, demographic factors and usage scenarios to form a prototypical person; it serves as an effective tool for outlining target users and connecting user needs with design directions.
User ID: the mobile phone user ID can enable the multi-screen interaction server to identify the user, and the user can conduct AI intelligent comparison and calculation through the internal big data to find out information content or instruction which is most suitable for the user, and the information content or instruction is used as triggering response content.
In order to describe the technical content, constructional features, achieved objects and effects of the present invention in detail, the present invention will be described in detail with reference to the accompanying drawings and embodiments.
Fig. 1 is a network architecture diagram of a multi-screen interactive system according to a first embodiment of the present invention. The system 10 comprises an interactive screen device 11 and a mobile terminal 12. In this embodiment, the interactive screen device 11 is a large screen providing commercial services, typically an indoor or outdoor large screen together with its host computer; the large-screen device 11 may be a host computer or merely a UI display terminal controlled by a central advertising system, and it emits audio (sound wave signals) or Bluetooth signals through an attached speaker or Bluetooth device. The mobile terminal 12 may be a mobile phone, a tablet computer, or the like.
Further, please refer to fig. 2, which is a functional structure diagram of a multi-screen interactive system according to a first embodiment of the present invention.
The interactive screen device 11 comprises a sensor 110, an identification unit 111, a first processing unit 112, a sound playing unit 113.
The mobile terminal 12 includes a sound receiving unit 120, a second processing unit 121.
The sensor 110 of the interactive screen device 11 detects a gesture of the user, and the recognition unit 111 recognizes the gesture detected by the sensor 110 and parses it into a gesture control instruction. The interactive screen device 11 defines one or more gesture control instructions in advance. In this embodiment, the gesture corresponding to the predefined multi-screen interaction gesture control instruction is three fingers sliding down simultaneously on the screen of the interactive screen device 11; in other embodiments, the gesture corresponding to the predefined multi-screen interaction gesture control instruction may be a different gesture.
When the gesture control instruction analyzed by the recognition unit 111 is a multi-screen interaction instruction, the first processing unit 112 selects a specific information content according to a predefined selection rule, converts the specific information content into an audio file, and sends the audio file through the sound playing unit 113. Wherein the audio file may be a sound file, sent through a speaker; the audio file may also be a bluetooth signal, sent over bluetooth.
In particular, the selection rule is set corresponding to the gesture control instruction. In this embodiment, the selection rule is: when the gesture control instruction is a multi-screen interaction instruction, the currently displayed content is taken as the specific information content; further, the currently displayed content together with the web page link address corresponding to that content may be taken as the specific information content. The invention does not limit what the specific information content is.
Further, the first processing unit 112 obtains the specific information content either locally from the interactive screen device 11 or by accessing other servers over a network.
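By way of illustration only, the following sketch shows one way such a recognition unit and selection rule could be wired together. The gesture descriptor, instruction name and content fields are assumptions made for the example and are not defined in the patent; Python is used purely for readability.

```python
from typing import Any, Dict, Optional

# Hypothetical gesture descriptor and instruction name, for illustration only.
GESTURE_TO_INSTRUCTION = {
    "three_finger_swipe_down": "MULTI_SCREEN_INTERACTION",
}

def parse_gesture(gesture: str) -> Optional[str]:
    """Recognition unit: map a detected gesture to a predefined control instruction."""
    return GESTURE_TO_INSTRUCTION.get(gesture)

def select_specific_content(instruction: str, displayed: Dict[str, Any]) -> Optional[Dict[str, Any]]:
    """Selection rule: for a multi-screen interaction instruction, take the currently
    displayed content together with its web page link address (if any)."""
    if instruction == "MULTI_SCREEN_INTERACTION":
        return {"content": displayed["content"], "link": displayed.get("link")}
    return None

# Example: a three-finger downward swipe while a commodity picture is on screen.
instruction = parse_gesture("three_finger_swipe_down")
if instruction is not None:
    print(select_specific_content(instruction, {"content": "commodity.jpg",
                                                "link": "https://example.com/buy/42"}))
```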
Further, the first processing unit 112 includes:
a conversion module 1120, configured to convert the specific information content into a corresponding sound code;
the first Fourier transform module 1121, configured to perform a forward Fourier transform on the sound code generated by the conversion module 1120, thereby generating a corresponding sound wave signal;
the audio generation module 1122, configured to process the sound wave signal generated by the first Fourier transform module 1121 to generate an audio file.
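The patent does not pin down the modulation scheme that links the sound code to the sound wave signal; it only states that the conversion module produces a sound code, a Fourier transform stage produces the corresponding sound wave signal, and the audio generation module turns it into an audio file. The sketch below is one plausible realization under a simple frequency-shift-keying assumption: each byte of the sound code is synthesized as one audible tone and the result is written out as a WAV file. The sample rate, symbol duration and carrier frequencies are illustrative values, not taken from the patent.

```python
import wave
import numpy as np

SAMPLE_RATE = 44100      # samples per second (assumed)
SYMBOL_SECONDS = 0.1     # duration of one tone (assumed)
BASE_FREQ = 1000.0       # Hz carried by byte value 0 (assumed)
FREQ_STEP = 20.0         # Hz added per byte value (assumed)

def content_to_sound_code(content: str) -> bytes:
    """Conversion module: turn the specific information content into a sound code."""
    return content.encode("utf-8")

def sound_code_to_waveform(code: bytes) -> np.ndarray:
    """Generate the sound wave signal: one sine tone per sound-code byte."""
    t = np.arange(int(SAMPLE_RATE * SYMBOL_SECONDS)) / SAMPLE_RATE
    tones = [np.sin(2 * np.pi * (BASE_FREQ + b * FREQ_STEP) * t) for b in code]
    return np.concatenate(tones)

def waveform_to_audio_file(signal: np.ndarray, path: str) -> None:
    """Audio generation module: write the signal as a 16-bit mono WAV file."""
    pcm = (signal * 32767).astype(np.int16)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(pcm.tobytes())

# Example: encode a currently displayed link and produce the audio file.
waveform_to_audio_file(
    sound_code_to_waveform(content_to_sound_code("https://example.com/item/1")),
    "share.wav",
)
```

As the description notes, the audio file may alternatively be carried as a Bluetooth signal rather than played through a speaker; the same sound-code payload would then simply travel over that channel instead.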
The sound receiving unit 120 of the mobile terminal 12 receives the audio file transmitted by the interactive screen device 11, and the second processing unit 121 converts the audio file into corresponding information content.
Further, the second processing unit 121 includes:
the parsing module 1210 is configured to parse the audio file received by the sound receiving unit 120 to obtain a corresponding sound wave signal;
a second Fourier transform module 1211, configured to perform an inverse Fourier transform on the sound wave signal generated by the parsing module 1210, so as to obtain a corresponding sound code;
an information processing module 1212, configured to convert the sound code generated by the second Fourier transform module 1211 into information content.
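On the receiving side, the parsing module, the second Fourier transform module and the information processing module can be pictured as the mirror image of the encoder. The sketch below decodes audio produced under the same assumed frequency mapping as the encoding sketch above: the received signal is cut into symbol-length frames, each frame is Fourier-analysed, the dominant frequency is mapped back to a byte of the sound code, and the sound code is converted into information content. The per-frame spectral analysis stands in for the decoder's Fourier transform stage; it is an interpretation, not the patent's literal wording.

```python
import wave
import numpy as np

SAMPLE_RATE = 44100      # must match the assumed encoder parameters
SYMBOL_SECONDS = 0.1
BASE_FREQ = 1000.0
FREQ_STEP = 20.0

def audio_file_to_waveform(path: str) -> np.ndarray:
    """Parsing module: recover the sound wave signal from the received audio file."""
    with wave.open(path, "rb") as wav:
        frames = wav.readframes(wav.getnframes())
    return np.frombuffer(frames, dtype=np.int16).astype(np.float64) / 32767.0

def waveform_to_sound_code(signal: np.ndarray) -> bytes:
    """Per-symbol Fourier analysis: find each tone's peak frequency and map it back to a byte."""
    n = int(SAMPLE_RATE * SYMBOL_SECONDS)
    code = bytearray()
    for start in range(0, len(signal) - n + 1, n):
        spectrum = np.abs(np.fft.rfft(signal[start:start + n]))
        peak_hz = np.fft.rfftfreq(n, 1.0 / SAMPLE_RATE)[np.argmax(spectrum)]
        code.append(int(round((peak_hz - BASE_FREQ) / FREQ_STEP)))
    return bytes(code)

def sound_code_to_content(code: bytes) -> str:
    """Information processing module: convert the sound code back into information content."""
    return code.decode("utf-8")

# Decodes the file produced by the encoding sketch above.
print(sound_code_to_content(waveform_to_sound_code(audio_file_to_waveform("share.wav"))))
```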
Further, when the mobile terminal 12 obtains the information content, the corresponding operations may be performed according to the information content:
when the received information content is text, sound, voice or image, the mobile terminal 12 can directly display or play the information content, so that the interactive screen device 11 can directly share the picture or video file being displayed to the mobile terminal 12 for multi-screen display interaction.
When the received information content is a web page link or a service link, the mobile terminal 12 may access the web page link/service link to obtain the page content corresponding to the web page link/service link. For example, the interactive screen device 11 shares the commodity picture being displayed to the mobile terminal 12 and also shares the purchase page link of the commodity to the mobile terminal 12, so that the mobile terminal 12 can obtain further associated content while sharing the screen content.
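As a small illustrative sketch, the dispatch on the recovered information content might look as follows; the content envelope, type names and display helper are hypothetical, not part of the patent.

```python
from typing import Any, Dict
import webbrowser

def display_or_play(data: Any) -> None:
    """Placeholder for the terminal's own display / playback pipeline."""
    print("showing:", data)

def handle_information_content(content: Dict[str, Any]) -> None:
    """Dispatch on the type of the recovered information content."""
    kind = content.get("type")
    if kind in ("text", "sound", "voice", "image"):
        display_or_play(content["data"])   # show or play the shared media directly
    elif kind in ("web_link", "service_link"):
        webbrowser.open(content["url"])    # fetch the page behind the shared link

# Example: a shared commodity picture plus its purchase page link.
handle_information_content({"type": "image", "data": "commodity.jpg"})
handle_information_content({"type": "web_link", "url": "https://example.com/buy/42"})
```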
Fig. 3 is a network architecture diagram of a multi-screen interactive system according to a second embodiment of the present invention. The system 20 comprises an interactive screen device 21, a mobile terminal 22 and a multi-screen interactive server 23. The multi-screen interaction server 23 establishes communication connection with the interaction screen device 21 and the mobile terminal 22 through a communication network. Also, the multi-screen interactive server 23 may establish a communication connection with one or more interactive screen devices 21 through a network, and may also establish a communication connection with one or more mobile terminals 22 through a network.
Specifically, the interactive screen device 21 can perform data transmission and command interaction with the multi-screen interactive server 23 through corresponding interfaces, which are the same as those in the prior art, and will not be described herein.
The multi-screen interaction server 23 may be a single server or a combination of multiple servers; it is responsible for coordinating the signals of different screens and triggers response content through internal control instructions or external control signaling to achieve the multi-screen interaction effect.
Further, please refer to fig. 4, which is a functional structure diagram of a multi-screen interactive system according to a second embodiment of the present invention.
The interactive screen device 21 comprises a sensor 210, a recognition unit 211, a first processing unit 212, a sound playing unit 213.
The mobile terminal 22 includes a sound receiving unit 220 and a second processing unit 221.
The multi-screen interaction server 23 comprises a matching unit 230, a pushing unit 231 and a database 232.
When the gesture control instruction analyzed by the recognition unit 211 is a multi-screen interaction instruction, the first processing unit 212 selects a specific information ID according to a predefined selection rule, converts the specific information ID into an audio file, and sends the audio file through the sound playing unit 213.
The selection rule is set corresponding to the gesture control instruction. In this embodiment, the selection rule is: when the gesture control instruction is a multi-screen interaction instruction, the currently displayed content is used as the specific information ID; further, the currently displayed content together with the identity ID of the interactive screen device 21 may be used as the specific information ID. The invention is not limited to this embodiment.
Further, the first processing unit 212 includes:
a conversion module 2120 for converting the specific information ID into a corresponding sound code;
the first Fourier transform module 2121, configured to perform a forward Fourier transform on the sound code generated by the conversion module 2120, thereby generating a corresponding sound wave signal;
the audio generation module 2122, configured to process the sound wave signal generated by the first Fourier transform module 2121 to generate an audio file.
The sound receiving unit 220 of the mobile terminal 22 receives the audio file transmitted from the interactive screen device 21, and the second processing unit 221 converts the audio file into a corresponding information ID and transmits it to the multi-screen interactive server 23.
Further, the second processing unit 221 includes:
the parsing module 2210 is configured to parse the audio file received by the sound receiving unit 220 to obtain a corresponding sound wave signal;
the second Fourier transform module 2211, configured to perform an inverse Fourier transform on the sound wave signal generated by the parsing module 2210, so as to obtain a corresponding sound code;
the information processing module 2212, configured to convert the sound code generated by the second Fourier transform module 2211 into an information ID.
When the multi-screen interaction server 23 receives the specific information ID transmitted from the mobile terminal 22, the matching unit 230 queries the database 232 to obtain the information content corresponding to the specific information ID. The database 232 stores the identity IDs of the interactive screen devices 21 that establish communication connection with the multi-screen interactive server 23, the information content to be displayed, and the association relationship between each interactive screen device 21 and the information content to be displayed, that is, the multi-screen interactive server 23 uniformly arranges and distributes the information content to be displayed by each interactive screen device 21.
The pushing unit 231 is configured to send the determined information content to the mobile terminal 22, and when the mobile terminal 22 obtains the information content, a corresponding operation may be performed according to the information content.
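A minimal sketch of the matching unit's lookup, assuming the database 232 is modelled as a relational table keyed by the interactive screen's identity ID and the information ID; the table name, column names and sample row are illustrative only.

```python
import sqlite3
from typing import Optional

conn = sqlite3.connect(":memory:")  # stand-in for the persistent database 232
conn.execute(
    "CREATE TABLE screen_content ("
    "screen_id TEXT, info_id TEXT, content TEXT, "
    "PRIMARY KEY (screen_id, info_id))"
)
conn.execute(
    "INSERT INTO screen_content VALUES (?, ?, ?)",
    ("screen-001", "info-42", "https://example.com/item/42"),  # sample row
)

def lookup_content(screen_id: str, info_id: str) -> Optional[str]:
    """Matching unit: return the information content associated with the given
    interactive screen identity ID and information ID, or None if not found."""
    row = conn.execute(
        "SELECT content FROM screen_content WHERE screen_id = ? AND info_id = ?",
        (screen_id, info_id),
    ).fetchone()
    return row[0] if row else None

# The pushing unit would then send the looked-up content to the mobile terminal.
print(lookup_content("screen-001", "info-42"))
```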
Therefore, when a user operates the interactive screen with a gesture that matches the multi-screen interaction gesture, the interactive screen acquires the specific information content, converts it into an audio file and sends it, so that the user's mobile terminal can share the specific information content; content of interest to the user is shared through the multi-screen interaction server, changing the way outdoor media are presented.
Further, the mobile terminal 22 runs an APP that implements the second processing unit 221 and its corresponding functions. Specifically, the user starts the APP on the mobile terminal 22 and, when interacting with the interactive screen device 21 through gesture operations, the APP receives and recognizes the specific information content transmitted by the interactive screen device 21 in the form of an audio file.
Further, the interactive screen device 21 is a device with a touch screen capable of detecting a user's operation gesture, so that the operation gesture can be recognized and parsed into a gesture control instruction.
In particular, if the sensor 210 of the interactive screen device 21 does not detect a touch gesture, the interactive screen device 21 selects what to play based on its playback history, for example the information that has previously been played most frequently, or designated information.
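A minimal sketch of such a fallback choice, assuming the playback history is available as a simple list of previously played item identifiers.

```python
from collections import Counter
from typing import List

def choose_fallback(play_history: List[str], designated: str) -> str:
    """With no touch gesture detected, play the most frequently played item,
    falling back to a designated item when there is no history."""
    counts = Counter(play_history)
    return counts.most_common(1)[0][0] if counts else designated

print(choose_fallback(["ad-1", "ad-2", "ad-1"], designated="ad-default"))
```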
Fig. 5 is a flow chart of a multi-screen interaction method according to a first embodiment of the invention. The method is applied to the multi-screen interaction system, and specifically comprises the following steps:
Step S30, the interactive screen device identifies the detected gesture of the user, and parses the identified gesture to obtain a gesture control instruction.
The interactive screen device defines gesture control instructions in advance.
Further, the number of the predefined gesture control commands is one or more, and in this embodiment, the gestures corresponding to the predefined multi-screen interactive gesture control commands are: three fingers slide downwards on the screen of the interactive screen device at the same time, and in other embodiments, the gesture corresponding to the predefined multi-screen interactive gesture control instruction may also be other gestures.
Step S31, when the analyzed gesture control instruction is a multi-screen interaction instruction, selecting specific information content according to a predefined selection rule, converting the specific information content into an audio file, and sending the audio file.
Referring to fig. 6, the specific information content is converted into an audio file by the following steps:
step S311, converting the specific information content into a corresponding sound code;
Step S312, a forward Fourier transform is performed on the generated sound code, thereby generating a corresponding sound wave signal.
Step S313, the sound wave signal generated by the forward Fourier transform is processed to generate an audio file.
Step S32, the mobile terminal receives the audio file sent by the interactive screen device and converts the audio file into the corresponding information content.
Referring to fig. 7, the audio file is converted into the corresponding information content specifically through the following steps:
step S321, analyzing the received audio file to obtain a corresponding sound wave signal;
step S322, performing inverse Fourier transform on the generated sound wave signals to obtain corresponding sound codes;
Step S323, the sound code obtained by the inverse Fourier transform is converted into information content.
In a further development, the method further comprises:
the mobile terminal executes corresponding operations according to the acquired information content:
when the received information content is text, sound, voice and image, the mobile terminal can directly display or play the information content, so that the interactive screen equipment can directly share the pictures and video files which are being displayed to the mobile terminal, and multi-screen display interaction is performed.
When the received information content is a web page link or a service link, the mobile terminal can obtain the page content corresponding to the web page link or the service link by accessing the web page link or the service link. For example, the interactive screen device shares the commodity picture being displayed to the mobile terminal and also shares the purchasing page link of the commodity to the mobile terminal, so that the mobile terminal can share the screen content and obtain further associated content.
Referring to fig. 8, a flow chart of a multi-screen interaction method according to a second embodiment of the invention is shown, the method includes:
Step S40, the interactive screen device identifies the detected gesture of the user, and parses the identified gesture to obtain a gesture control instruction.
The interactive screen device defines gesture control instructions in advance.
Step S41, when the analyzed gesture control instruction is a multi-screen interaction instruction, selecting a specific information ID according to a predefined selection rule, converting the specific information ID into an audio file, and sending the audio file.
The selection rule is set corresponding to the gesture control instruction, and in this embodiment, the selection rule is: and when the gesture control instruction is a multi-screen interaction instruction, taking the currently displayed content as the specific information ID.
Referring to fig. 9, the specific information ID is converted into an audio file by the following steps:
step S411, converting the specific information ID into a corresponding sound code;
in step S412, the generated acoustic codes are subjected to forward fourier transform, so as to generate corresponding acoustic signals.
In step S413, the acoustic wave signal generated by the fourier forward transform is processed to generate an audio file.
In step S42, the mobile terminal receives the audio file sent by the interactive screen device, converts the audio file into a corresponding information ID, and sends the information ID to the multi-screen interactive server.
Referring to fig. 10, the audio file is converted into a corresponding information ID, which is implemented by the following steps:
step S421, analyzing the received audio file to obtain a corresponding sound wave signal;
step S422, performing Fourier inverse transformation on the sound wave signals generated by analysis, so as to obtain corresponding sound codes;
Step S423, the sound code obtained by the inverse Fourier transform is converted into the corresponding information ID.
In step S43, when the multi-screen interaction server receives the specific information ID sent by the mobile terminal, the database is queried to obtain the information content corresponding to the specific information ID.
The database stores in advance the identity IDs of the interactive screen devices that establish communication connection with the multi-screen interactive server, the information content to be displayed, and the association relationship between each interactive screen device and the information content to be displayed; that is, the multi-screen interactive server uniformly arranges and distributes the information content to be displayed by each interactive screen device.
And step S44, the multi-screen interaction server sends the determined information content to the mobile terminal.
Further, when the mobile terminal obtains the information content, a corresponding operation may be performed according to the information content.
Fig. 11 is a flow chart of a multi-screen interaction method according to a third embodiment of the invention. The method is applied to the interactive screen device, and specifically comprises the following steps:
Step S50, the interactive screen device identifies the detected gesture of the user, and parses the identified gesture to obtain a gesture control instruction.
The interactive screen device defines gesture control instructions in advance.
Further, the number of the predefined gesture control commands is one or more, and in this embodiment, the gestures corresponding to the predefined multi-screen interactive gesture control commands are: three fingers slide downwards on the screen of the interactive screen device at the same time, and in other embodiments, the gesture corresponding to the predefined multi-screen interactive gesture control instruction may also be other gestures.
Step S51, when the analyzed gesture control instruction is a multi-screen interaction instruction, selecting specific information content according to a predefined selection rule, and converting the specific information content into an audio file.
Step S52, the audio file is sent to the mobile terminal, so that the mobile terminal converts the audio file into corresponding information content.
In a further development, the method further comprises: the mobile terminal executes a corresponding operation according to the acquired information content.
Fig. 12 is a flowchart of a multi-screen interaction method according to a fourth embodiment of the invention. The method is applied to the interactive screen device, and specifically comprises the following steps:
Step S60, the interactive screen device identifies the detected gesture of the user, and parses the identified gesture to obtain a gesture control instruction.
The interactive screen device defines gesture control instructions in advance.
Step S61, when the analyzed gesture control instruction is a multi-screen interaction instruction, selecting a specific information ID according to a predefined selection rule, and converting the specific information ID into an audio file.
The selection rule is set corresponding to the gesture control instruction, and in this embodiment, the selection rule is: and when the gesture control instruction is a multi-screen interaction instruction, taking the currently displayed content as the specific information ID.
Step S62, the audio file is sent to the mobile terminal, so that the mobile terminal converts the audio file into a corresponding information ID, and sends the information ID to the multi-screen interaction server to receive the information content corresponding to the specific information ID, which is obtained by the multi-screen interaction server inquiring the database.
The database stores in advance the identity IDs of the interactive screen devices that establish communication connection with the multi-screen interactive server, the information content to be displayed, and the association relationship between each interactive screen device and the information content to be displayed; that is, the multi-screen interactive server uniformly arranges and distributes the information content to be displayed by each interactive screen device.
According to the multi-screen interaction system, the multi-screen interaction method and the interactive screen device provided by the embodiments of the invention, when a user operates the interactive screen through a gesture that matches the multi-screen interaction gesture, the interactive screen acquires specific information content, converts it into an audio file and sends it, so that the user's mobile terminal can share the specific information content; content of interest to the user is shared through the multi-screen interaction server, thereby changing the way outdoor media are presented.
In the several embodiments provided in the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection may be through some interfaces, devices or units, and may be electrical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. With such understanding, all or part of the technical solution of the present invention may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM for short), a random access memory (RAM for short), a magnetic disk, or an optical disk.
The foregoing description is only illustrative of the present invention and is not intended to limit the scope of the invention, and all equivalent structures or equivalent processes or direct or indirect application in other related technical fields are included in the scope of the present invention.
Claims (14)
1. A multi-screen interactive system, the system comprising:
an interactive screen device comprising:
the sensor is used for detecting gestures of a user;
the recognition unit is used for analyzing the detected gesture to obtain a corresponding gesture control instruction; the interactive screen device defines gesture control instructions in advance;
a first processing unit for:
when the gesture control instruction analyzed by the identification unit is a multi-screen interaction instruction, selecting specific information content according to a predefined selection rule, converting the specific information content into an audio file, and sending the audio file through a sound playing unit; wherein, the selection rule is set corresponding to the gesture control instruction; the selection rule is as follows: when the gesture control instruction is a multi-screen interaction instruction, the currently displayed content and/or a webpage link address corresponding to the content are used as specific information content;
the specific information content is obtained from the interactive screen equipment locally or through network access other servers;
the mobile terminal is used for receiving the audio file sent by the interactive screen device through the sound receiving unit, converting the audio file into corresponding information content and executing corresponding operation according to the information content; wherein, the performing the corresponding operation includes:
When the received information content is text, sound, voice and image, the mobile terminal directly displays or plays the information content so as to directly share the picture and video file which are being displayed by the interactive screen device to the mobile terminal for multi-screen display interaction;
when the received information content is a webpage link and a service link, the mobile terminal obtains the webpage content corresponding to the webpage link and the service link by accessing the webpage link and the service link, so that the commodity picture which is being displayed by the interactive screen device is shared to the mobile terminal, and the mobile terminal obtains the associated content through the webpage link and the service link.
2. The multi-screen interactive system according to claim 1, wherein said first processing unit comprises:
the conversion module is used for converting the specific information content into a corresponding sound code;
the first Fourier transform module is used for carrying out Fourier forward transform on the acoustic code generated by the conversion module so as to generate a corresponding acoustic wave signal;
and the audio generation module is used for processing the sound wave signals generated by the first Fourier transform module to generate an audio file.
3. The multi-screen interactive system according to claim 2, wherein the mobile terminal comprises a second processing unit;
the second processing unit includes:
the analysis module is used for analyzing the received audio file to obtain a corresponding sound wave signal;
the second Fourier transform module is used for performing inverse Fourier transform on the sound wave signal generated by the analysis module so as to obtain a corresponding sound code;
and the information processing module is used for converting the sound code generated by the second Fourier transform module into information content.
4. The multi-screen interactive system according to claim 1, further comprising:
the multi-screen interaction server establishes communication connection with the mobile terminal and the interaction screen equipment through a network;
the first processing unit is further configured to select a specific information ID according to a predefined selection rule when the analyzed gesture control instruction is a multi-screen interaction instruction, convert the specific information ID into an audio file, and send the audio file through the sound playing unit;
the mobile terminal is also used for receiving the audio file sent by the interactive screen device, converting the audio file into a corresponding information ID, and sending the information ID to the multi-screen interaction server;
the multi-screen interaction server is also used for querying a database to obtain the information content corresponding to the specific information ID and transmitting the determined information content to the mobile terminal; the database pre-stores the identity ID of each interactive screen device that has established a communication connection with the multi-screen interaction server and the information content it displays.
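The server's role in claim 4 reduces to a keyed lookup: the database associates each connected screen's identity ID and its broadcast information IDs with the content the screen is displaying. A hedged sketch follows; the in-memory dictionary and the identifier values are assumptions standing in for the patent's database.

```python
# Sketch of the multi-screen interaction server's database lookup (claim 4).
from typing import Optional

# screen identity ID -> {information ID -> information content}
DISPLAY_DB = {
    "screen-001": {"item-42": "https://example.com/item/42"},  # hypothetical
}


def resolve_information_id(screen_id: str, info_id: str) -> Optional[str]:
    """Return the content registered for this screen and information ID."""
    return DISPLAY_DB.get(screen_id, {}).get(info_id)


print(resolve_information_id("screen-001", "item-42"))
```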
5. The multi-screen interactive system according to claim 4, wherein said first processing unit comprises:
the conversion module is used for converting the specific information ID into a corresponding sound code;
the first Fourier transform module is used for performing a forward Fourier transform on the sound code generated by the conversion module to generate a corresponding sound wave signal;
and the audio generation module is used for processing the sound wave signal generated by the first Fourier transform module to generate the audio file.
6. The multi-screen interactive system according to claim 5, wherein the mobile terminal comprises a second processing unit comprising:
the analysis module is used for analyzing the received audio file to obtain a corresponding sound wave signal;
the second Fourier transform module is used for performing an inverse Fourier transform on the sound wave signal generated by the analysis module to obtain a corresponding sound code;
and the information processing module is used for converting the sound code generated by the second Fourier transform module into an information ID.
7. A multi-screen interaction method, the method comprising:
the interactive screen device detects a gesture of a user and analyzes the detected gesture to obtain a corresponding gesture control instruction; the interactive screen device defines gesture control instructions in advance;
when the gesture control instruction is a multi-screen interaction instruction, the interactive screen device selects specific information content according to a predefined selection rule and converts the specific information content into an audio file; wherein the selection rule is set in correspondence with the gesture control instruction; the selection rule is as follows: when the gesture control instruction is a multi-screen interaction instruction, the currently displayed content and/or a webpage link address corresponding to the content is used as the specific information content;
the interactive screen device sends the audio file through the sound playing unit;
the specific information content is obtained locally from the interactive screen device or from other servers accessed over the network;
the mobile terminal receives the audio file sent by the interactive screen device, converts the audio file into corresponding information content, and executes a corresponding operation according to the information content; wherein executing the corresponding operation includes:
when the received information content is text, sound, voice or an image, the mobile terminal directly displays or plays the information content, so that the pictures and video files being displayed by the interactive screen device are shared directly to the mobile terminal for multi-screen display interaction;
when the received information content is a webpage link or a service link, the mobile terminal obtains the corresponding webpage content by accessing the link, so that the commodity picture being displayed by the interactive screen device is shared to the mobile terminal and the mobile terminal obtains the associated content through the link.
8. The multi-screen interaction method according to claim 7, wherein when the gesture control instruction is a multi-screen interaction instruction, the interactive screen device selects specific information content according to a predefined selection rule and converts the specific information content into an audio file, which specifically comprises:
when the gesture control instruction is a multi-screen interaction instruction, the interactive screen device selects specific information content according to a predefined selection rule;
converting the specific information content into a corresponding sound code;
performing a forward Fourier transform on the sound code generated by the conversion module to generate a corresponding sound wave signal;
and processing the sound wave signal generated by the first Fourier transform module to generate an audio file.
9. The multi-screen interaction method according to claim 8, wherein the mobile terminal receives the audio file sent by the interactive screen device and converts the audio file into the corresponding information content, which specifically comprises:
the mobile terminal receives the audio file sent by the interactive screen device and analyzes the received audio file to obtain a corresponding sound wave signal;
performing an inverse Fourier transform on the sound wave signal generated by the analysis module to obtain a corresponding sound code;
and converting the sound code generated by the second Fourier transform module into information content.
10. The multi-screen interaction method according to claim 7, wherein when the gesture control instruction is a multi-screen interaction instruction, the method further comprises:
the multi-screen interaction server selects a specific information ID according to a predefined selection rule, converts the specific information ID into an audio file, and sends the audio file;
the mobile terminal receives the audio file sent by the interactive screen device, converts the audio file into a corresponding information ID, and sends the information ID to the multi-screen interaction server;
the multi-screen interaction server queries a database to obtain the information content corresponding to the specific information ID and sends the determined information content to the mobile terminal; the database pre-stores the identity ID of each interactive screen device that has established a communication connection with the multi-screen interaction server and the information content it displays.
11. The multi-screen interaction method according to claim 10, wherein the multi-screen interaction server selects a specific information ID according to a predefined selection rule, converts the specific information ID into an audio file, and transmits the audio file, which specifically comprises:
the multi-screen interaction server selects a specific information ID according to a predefined selection rule;
converting the specific information ID into a corresponding sound code;
performing a forward Fourier transform on the sound code generated by the conversion module to generate a corresponding sound wave signal;
and processing the sound wave signal generated by the forward Fourier transform to generate the audio file, and transmitting the audio file.
12. The multi-screen interaction method according to claim 11, wherein the mobile terminal receives the audio file transmitted by the interactive screen device, converts the audio file into a corresponding information ID, and sends the information ID to the multi-screen interaction server, which specifically comprises:
the mobile terminal receives the audio file sent by the interactive screen device, and analyzes the received audio file to obtain a corresponding sound wave signal;
performing an inverse Fourier transform on the obtained sound wave signal to obtain a corresponding sound code;
and converting the sound code obtained by the inverse Fourier transform into an information ID.
13. An interactive screen device, the device comprising:
the sensor is used for detecting gestures of a user;
the recognition unit is used for recognizing the gesture of the user detected by the sensor so as to analyze and obtain a corresponding gesture control instruction; the interactive screen device defines gesture control instructions in advance;
a first processing unit for:
when the gesture control instruction analyzed by the recognition unit is a multi-screen interaction instruction, selecting specific information content according to a predefined selection rule and converting the specific information content into an audio file; wherein the selection rule is set in correspondence with the gesture control instruction; the selection rule is as follows: when the gesture control instruction is a multi-screen interaction instruction, the currently displayed content and/or a webpage link address corresponding to the content is used as the specific information content;
the sound playing unit is used for sending the audio file to the mobile terminal, so that the mobile terminal converts the audio file into corresponding information content and executes a corresponding operation according to the information content; wherein executing the corresponding operation includes:
when the received information content is text, sound, voice or an image, the mobile terminal directly displays or plays the information content, so that the pictures and video files being displayed by the interactive screen device are shared directly to the mobile terminal for multi-screen display interaction;
when the received information content is a webpage link or a service link, the mobile terminal obtains the corresponding webpage content by accessing the link, so that the commodity picture being displayed by the interactive screen device is shared to the mobile terminal and the mobile terminal obtains the associated content through the link.
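Claim 13's control path (sensor, recognition unit, selection rule) can be pictured as a small lookup plus a rule function. The sketch below is illustrative only: the gesture names, the instruction constants, and the preference for the link address over the raw content are assumptions, since the claim allows the displayed content and/or its webpage link address.

```python
# Sketch of claim 13's path from a recognized gesture to the selected content.
from dataclasses import dataclass
from typing import Optional

# Predefined mapping from recognized gestures to gesture control instructions
# (gesture names are hypothetical).
GESTURE_INSTRUCTIONS = {
    "swipe_toward_user": "MULTI_SCREEN_INTERACTION",
    "palm_push": "PAUSE_PLAYBACK",
}


@dataclass
class ScreenState:
    displayed_content: str               # e.g. an identifier of the shown picture
    content_link: Optional[str] = None   # webpage link for that content, if any


def apply_selection_rule(instruction: Optional[str], state: ScreenState) -> Optional[str]:
    """Return the specific information content selected for this instruction."""
    if instruction != "MULTI_SCREEN_INTERACTION":
        return None  # other instructions do not trigger multi-screen sharing
    # Selection rule (one possible reading): prefer the webpage link address
    # when one exists, otherwise use the currently displayed content itself.
    return state.content_link or state.displayed_content


state = ScreenState("poster-001", "https://example.com/poster/001")  # hypothetical
instruction = GESTURE_INSTRUCTIONS.get("swipe_toward_user")
print(apply_selection_rule(instruction, state))
```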
14. The interactive screen device according to claim 13, wherein the first processing unit is further configured to select a specific information ID according to a predefined selection rule and convert the specific information ID into an audio file when the gesture control instruction analyzed by the recognition unit is a multi-screen interaction instruction;
the sound playing unit is further used for sending the audio file to the mobile terminal, so that the mobile terminal converts the audio file into a corresponding information ID and sends the information ID to the multi-screen interaction server in order to receive the information content corresponding to the specific information ID, where the information content is obtained by the multi-screen interaction server querying a database; the database pre-stores the identity ID of each interactive screen device that has established a communication connection with the multi-screen interaction server and the corresponding displayed information content.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810722040.5A CN108769261B (en) | 2018-07-04 | 2018-07-04 | Multi-screen interaction system, method and interaction screen equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108769261A (en) | 2018-11-06 |
CN108769261B (en) | 2023-11-24 |
Family
ID=63975853
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810722040.5A Active CN108769261B (en) | 2018-07-04 | 2018-07-04 | Multi-screen interaction system, method and interaction screen equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108769261B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109991864A (en) * | 2019-03-13 | 2019-07-09 | 佛山市云米电器科技有限公司 | Home automation scenery control system and its control method based on image recognition |
CN111756604B (en) * | 2019-03-29 | 2022-04-12 | 华为技术有限公司 | Equipment coordination method, device and system |
CN111586042A (en) * | 2020-05-07 | 2020-08-25 | 深圳康佳电子科技有限公司 | Multi-screen interaction method, terminal and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9990046B2 (en) * | 2014-03-17 | 2018-06-05 | Oblong Industries, Inc. | Visual collaboration interface |
US10331190B2 (en) * | 2016-11-09 | 2019-06-25 | Microsoft Technology Licensing, Llc | Detecting user focus on hinged multi-screen device |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103379368A (en) * | 2012-04-26 | 2013-10-30 | 安美世纪(北京)科技有限公司 | Multiple-screen integration displaying method and system |
CN203070208U (en) * | 2012-12-04 | 2013-07-17 | 冯时 | Multi-screen interactive system |
CN104717549A (en) * | 2013-12-13 | 2015-06-17 | 中国移动通信集团公司 | Multi-screen information interaction method and device |
CN104010226A (en) * | 2014-06-17 | 2014-08-27 | 合一网络技术(北京)有限公司 | Multi-terminal interactive playing method and system based on voice frequency |
CN106980479A (en) * | 2016-01-15 | 2017-07-25 | 优视科技(中国)有限公司 | Method, device and the server of multi-screen interactive |
CN106028129A (en) * | 2016-06-16 | 2016-10-12 | 深圳Tcl数字技术有限公司 | Method and system for sharing television programs among televisions |
Non-Patent Citations (2)
Title |
---|
" 基于MSD6A801的Android有线数字电视机顶盒的软件设计与实现";陈春健;《中国优秀硕士学位论文全文数据库 (信息科技辑)》;I136-2045 * |
"HMD-based virtual multi-screen control system and its gesture interface";Sangho Yoon;《2017 International SoC Design Conference》;全文 * |
Similar Documents
Publication | Title |
---|---|
JP6713034B2 (en) | Smart TV audio interactive feedback method, system and computer program |
CN108847214B (en) | Voice processing method, client, device, terminal, server and storage medium |
US10235025B2 (en) | Various systems and methods for expressing an opinion |
JP6990772B2 (en) | Information push method, storage medium, terminal equipment and server |
CN108769261B (en) | Multi-screen interaction system, method and interaction screen equipment |
CN110149549B (en) | Information display method and device |
CN112653902B (en) | Speaker recognition method and device and electronic equipment |
CN104065979A (en) | Method for dynamically displaying information related with video content and system thereof |
CN109448709A (en) | Control method for screen casting by a terminal, and terminal |
JP6580132B2 (en) | Method and apparatus for providing information associated with media content |
CN106131133B (en) | Browsing history record information viewing method, device and system |
CN105389710A (en) | Method of delivering an advertising message |
CN103778549A (en) | Mobile application popularizing system and method |
CN102508646A (en) | Microsoft corp |
CN102347839A (en) | Content signaturing |
CN108574878B (en) | Data interaction method and device |
CN114501103B (en) | Live video-based interaction method, device, equipment and storage medium |
CN108769262B (en) | Large-screen information pushing system, large-screen equipment and method |
CN104038774A (en) | Ring tone file generating method and device |
CN104837046A (en) | Multi-media file processing method and device |
CN114374853A (en) | Content display method and device, computer equipment and storage medium |
CN208638380U (en) | Multi-screen interaction system and interactive screen device |
KR20190101914A (en) | Apparatus and method for streaming video |
CN106446042B (en) | Information display method and device |
JP2018106524A (en) | Interactive device, interactive method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||