CN106297804B - Voice interaction method and system of projector - Google Patents

Voice interaction method and system of projector

Info

Publication number
CN106297804B
CN106297804B (application CN201610803995.4A)
Authority
CN
China
Prior art keywords
server
projector
information
index information
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610803995.4A
Other languages
Chinese (zh)
Other versions
CN106297804A (en)
Inventor
钟波
肖适
刘志明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Jimi Technology Co Ltd
Original Assignee
Chengdu Jimi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Jimi Technology Co Ltd filed Critical Chengdu Jimi Technology Co Ltd
Priority to CN201610803995.4A
Publication of CN106297804A
Application granted
Publication of CN106297804B

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/28 Constructional details of speech recognition systems
    • G10L15/30 Distributed recognition, e.g. in client-server systems, for mobile phones or network applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Telephonic Communication Services (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention provides a voice interaction method and a voice interaction system for a projector, wherein the method comprises the following steps: the mobile terminal sends voice information to the server; the server stores the voice information and generates index information associated with the voice information; the server sends the index information to a projector; and the projector acquires the voice information from the server according to the index information.

Description

Voice interaction method and system of projector
Technical Field
The invention relates to the field of internet, in particular to a voice interaction method and system of a projector.
Background
During information interaction with a projector, voice interaction is sometimes required, either to send voice information to the projector for playback or to receive voice information sent from the projector, since voice makes the interaction quicker and more convenient. However, existing information interaction with projectors generally involves text, pictures, and the like, and does not include a voice interaction process.
Disclosure of Invention
In view of this, embodiments of the present invention provide a voice interaction method and system for a projector, where a mobile terminal sends voice information to a server, and after receiving index information associated with the voice information sent by the server, the projector obtains the voice information from the server through the index information, so as to implement voice interaction between the projector and the mobile terminal.
To achieve this purpose, the invention adopts the following technical solution:
a method of voice interaction for a projector, the method comprising: the mobile terminal sends voice information to the server; the server stores the voice information and generates index information associated with the voice information; the server sends the index information to a projector; and the projector acquires the voice information from the server according to the index information.
A voice interaction system of a projector comprises a mobile terminal, a server and the projector, wherein the mobile terminal is used for sending voice information to the server; the server is used for storing the voice information and generating index information associated with the voice information; the server is also used for sending the index information to the projector; and the projector is used for acquiring the voice information from the server according to the index information and playing the voice information.
According to the voice interaction method and system for the projector, the mobile terminal sends voice information to the server. The server stores the received voice information, generates index information associated with it, and sends the index information to the projector. The projector can then acquire the voice information from the server according to the received index information, completing the voice transmission from the mobile terminal to the projector and achieving voice interaction between the mobile terminal and the projector.
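The end-to-end flow summarized above (send, store and index, forward index, fetch by index) can be sketched as a minimal in-memory model. The class names, the use of a UUID as index information, and the dictionary-based storage are illustrative assumptions, not details taken from the patent.

```python
import uuid

class RelayServer:
    """Stores uploaded voice information and issues index information
    (here, the index doubles as the storage-location key)."""
    def __init__(self):
        self._storage = {}

    def receive_voice(self, voice_bytes):
        # Store the voice information and generate associated index information.
        index = str(uuid.uuid4())
        self._storage[index] = voice_bytes
        return index

    def fetch_voice(self, index):
        # Look up stored voice information by its index; None if unknown.
        return self._storage.get(index)

class ProjectorClient:
    """Receives index information and pulls the voice data from the server."""
    def __init__(self, server):
        self._server = server
        self.received = None

    def on_index(self, index):
        # Acquire the voice information from the server according to the index.
        self.received = self._server.fetch_voice(index)

# Mobile terminal uploads voice; the server returns the index, which is
# then forwarded to the projector.
server = RelayServer()
index = server.receive_voice(b"voice-data")
projector = ProjectorClient(server)
projector.on_index(index)
```

A real deployment would transport these calls over the network connection of fig. 1; the sketch only shows the division of responsibility between the three roles.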
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 shows an interaction diagram of a server, a mobile terminal and a projector according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a voice interaction method of a projector according to a first embodiment of the present invention;
fig. 3 is a flowchart illustrating a voice interaction method of a projector, including the pairing determination of step S121, according to the first embodiment of the present invention;
fig. 4 is a flowchart illustrating a voice interaction method of a projector, including the power-state determination of step S122, according to the first embodiment of the present invention;
fig. 5 is a flowchart illustrating a part of the steps of a voice interaction method of a projector according to a first embodiment of the present invention;
fig. 6 is a flowchart illustrating a voice interaction method of a projector according to a second embodiment of the present invention;
fig. 7 is a flowchart illustrating a part of the steps of a voice interaction method of a projector according to a third embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Fig. 1 is a schematic diagram illustrating interaction between a server 100 and a mobile terminal 200 and a projector 300 according to an embodiment of the present invention. The server 100 is communicatively connected with the mobile terminal 200 and the projector 300 through the network 400 for data communication or interaction, and the communication connection established between the projector 300 and the server 100 may be a long connection. The server 100 may be a web server, a database server, an access server, a storage server, or the like. The mobile terminal 200 may be a Personal Computer (PC), a tablet PC, a smart phone, a Personal Digital Assistant (PDA), and the like.
First embodiment
Fig. 2 shows a voice interaction method for a projector according to an embodiment of the present invention, and referring to fig. 2, the method includes:
step S110: the mobile terminal sends voice information to the server.
When voice interaction with the projector 300 is required, that is, when voice information is to be transmitted to the projector 300, the user may input the desired voice information through the mobile terminal 200, for example by recording it through a microphone of the mobile terminal 200 or in a similar manner.
The mobile terminal 200 transmits the received voice information input by the user to the server 100.
Step S120: the server stores the voice information and generates index information associated with the voice information.
After receiving the voice information transmitted by the mobile terminal 200, the server 100 stores it and generates index information associated with the stored voice information; the index information may include the storage location of the voice information.
Step S130: and the server sends the index information to a projector.
The server 100 transmits the generated index information to the projector 300.
Specifically, in this embodiment, the voice message sent by the mobile terminal 200 may include an identification code of the projector 300. The identification code corresponds to a unique projector 300, namely the projector 300 to which the voice message is to be sent. The identification code and its correspondence with the projector 300 may be stored in the server 100 in advance, so the server 100 can look up the projector 300 corresponding to the identification code carried in the voice information and send the index information to the projector 300 found.
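As a sketch of this lookup, the identification-code registry can be modelled as a dictionary held by the server; the class and method names below are illustrative assumptions, not details from the patent.

```python
class IndexRouter:
    """Resolves the target projector from the identification code carried
    in the voice message and forwards the index information to it."""
    def __init__(self):
        # identification code -> projector handle, pre-stored on the server
        self._registry = {}

    def register(self, code, projector):
        self._registry[code] = projector

    def route_index(self, code, index):
        projector = self._registry.get(code)
        if projector is None:
            return False  # no projector corresponds to this identification code
        projector.on_index(index)
        return True

class StubProjector:
    """Records the last index information it was sent."""
    def __init__(self):
        self.last_index = None

    def on_index(self, index):
        self.last_index = index
```

If no projector is registered under the code, the sketch simply reports failure; the embodiment leaves that case to the server's feedback mechanisms.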
Further, in this embodiment, the server 100 may send the index information to the corresponding projector 300 only when the mobile terminal 200 has the authority to send voice information to that projector 300. This authority is satisfied when the mobile terminal 200 has a pairing relationship with the projector 300. Pairing may be understood as a mutual binding between the mobile terminal 200 and the projector 300: the mobile terminal 200 is bound to the identification code of the projector 300 to which the voice information is to be sent, that is, the mobile terminal 200 has a pairing relationship with the identification code carried in the transmitted voice information, and this pairing relationship is stored in the server 100.
Then, as shown in fig. 3, before sending the index information to the projector 300 in step S130, step S121 is further included: the server determines whether the mobile terminal and the projector are paired, and if so, executes step S130.
Specifically, the voice message may further include an identity identifier of the mobile terminal 200. To determine whether the mobile terminal 200 and the projector 300 are paired, the server 100 searches for the identification code paired with that identity identifier. If such an identification code is found, the server 100 checks whether it matches the identification code in the voice message; if they match, the mobile terminal 200 and the projector 300 are determined to be paired, and the generated index information may be sent to the projector 300.
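The pairing check just described can be sketched as a small function; `pairings` (identity identifier to projector identification code) is assumed to be stored on the server in advance, and all names are illustrative.

```python
def is_paired(pairings, terminal_identity, code_in_message):
    """Return True only if the identification code paired with the mobile
    terminal's identity identifier exists and matches the identification
    code carried in the voice message."""
    paired_code = pairings.get(terminal_identity)
    if paired_code is None:
        return False  # no identification code is paired with this identity
    return paired_code == code_in_message
```

When the function returns False, the server would withhold the index information and feed back a send failure to the mobile terminal.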
Of course, if the server 100 does not find an identification code paired with the identity identifier, or the identification code found does not match the one in the voice message, it determines that the mobile terminal 200 and the projector 300 are not paired and does not send the index information to the projector 300. In that case, the server 100 may feed back to the mobile terminal 200 that the devices are unpaired and that the voice information failed to be sent.
Alternatively, in this embodiment, the pairing determination may be performed before step S120: if the mobile terminal 200 and the projector 300 are not paired, step S120 (storing the voice information and generating the index information) and the subsequent steps are not executed.
In addition, in this embodiment, the index information may be sent to the projector 300 only when the projector 300 is in the on state. As shown in fig. 4, before step S130, step S122 may be further included: the server 100 determines whether the projector 300 is in the on state, and if so, executes step S130.
When the projector 300 is in the on state, the server 100 sends the generated index information to the projector 300; the projector 300 should also be connected to the network at this time. If the projector 300 is powered off or disconnected from the server, the server 100 does not send the index information and may feed back to the mobile terminal 200 that the projector 300 is off or its network connection is broken. In that case, the server 100 may periodically check the projector 300's power and network state and send the index information once the projector 300 is on and connected again.
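This delivery policy (send only when the projector is on and connected, otherwise notify the terminal and poll periodically) might be sketched as follows. The callables and the bounded poll count are assumptions made to keep the sketch self-contained; a real server would sleep between polls rather than loop immediately.

```python
def deliver_index(projector_online, send, notify_terminal, index, max_polls=3):
    """Send the index information when the projector reports it is on and
    connected; otherwise inform the mobile terminal, then poll the
    projector state (bounded here so the sketch terminates)."""
    if projector_online():
        send(index)
        return True
    # Projector is off or disconnected: feed that back to the terminal.
    notify_terminal("projector off or disconnected")
    for _ in range(max_polls):
        # A production server would wait between these periodic checks.
        if projector_online():
            send(index)
            return True
    return False
```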
In this embodiment, the power-state determination may be performed after the pairing determination: once the mobile terminal 200 and the projector 300 are determined to be paired, the server 100 then determines whether the projector 300 is in the on state.
Step S140: and the projector acquires the voice information from the server according to the index information.
After receiving the index information, the projector 300 may acquire the voice information from the server 100 according to the index information, and the acquiring process may be understood as a downloading process. After the projector 300 acquires the voice information, the server 100 may feed back to the mobile terminal 200 a message that the projector 300 successfully receives the voice information.
The network environment of the projector 300 may be a mobile communication network such as a 3G or 4G network, or a wireless network such as a WiFi network. Because 3G and 4G traffic is expensive, transmitting the voice information over those networks would be costly, so the projector 300 may acquire the voice information from the server 100 according to the index information only under a WiFi network.
Therefore, in a specific implementation of this embodiment, after receiving the index information, the projector 300 may first detect whether the current network environment is a WiFi network, and if so, execute the acquisition of the voice information from the server 100 in step S140. If the current network environment is a 3G or 4G network, the projector 300 either does not perform the acquisition in step S140, or asks the user whether to acquire the voice information over the current mobile network and, only if the user confirms, acquires it from the server 100 according to the index information.
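The network check can be sketched as a small policy function. The string network labels and the `confirm_on_mobile` callback (standing in for the user prompt) are illustrative assumptions.

```python
def should_download(network_type, confirm_on_mobile=None):
    """Download over WiFi unconditionally; over 3G/4G, only if the user
    confirms via the assumed confirm_on_mobile callback; otherwise skip."""
    if network_type == "wifi":
        return True
    if network_type in ("3g", "4g"):
        if confirm_on_mobile is not None:
            return bool(confirm_on_mobile())
        return False
    return False
```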
In this embodiment, a specific manner of acquiring the voice information from the server 100 by the projector 300 according to the index information may be as shown in fig. 5:
step S141: and the projector sends a voice acquisition request to the server, wherein the voice acquisition request comprises the index information.
When the projector 300 receives the index information, if the voice information needs to be acquired, a voice acquisition request is sent to the server 100. The corresponding index information received by the projector 300 is included in the voice acquisition request.
Step S142: and the server receives the voice acquisition request and searches the stored voice information according to the index information.
Since the index information is associated with the voice information and includes its storage location, the server 100 can find the stored voice information associated with the index information according to it.
Step S143: and the server sends the searched voice information to the projector.
It is understood that the server 100 transmits the searched voice information to the projector 300 that made the voice acquisition request.
Step S144: the projector receives the voice information.
The projector 300 receives the voice information transmitted from the server 100.
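Steps S141 to S144 amount to a request/response exchange keyed by the index information. A minimal sketch follows, with the request modelled as a plain dictionary (an assumption — the patent does not specify a message format):

```python
def make_voice_request(index):
    # S141: the projector's voice acquisition request carries the index.
    return {"index": index}

def handle_voice_request(storage, request):
    # S142/S143: the server looks up the stored voice information at the
    # storage location named by the index and returns it (None if absent).
    return storage.get(request["index"])
```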
Step S150: and the projector plays the voice information.
The projector 300 plays the acquired voice information, completing the voice interaction from the mobile terminal 200 to the projector.
Second embodiment
In this embodiment, the server 100 includes a first server and a second server. The first server may be configured to interact with the projector 300 and the mobile terminal 200, and the second server may be configured to store the voice information. Therefore, in this embodiment, as shown in fig. 6, the steps by which the mobile terminal 200 sends voice information to the projector 300 through the server 100 may be:
step S210: the mobile terminal 200 transmits voice information to the first server 100.
When the mobile terminal 200 transmits the voice information to the server, it transmits it to the first server. The transmitted voice information further includes path information for the index information, the path information specifying the sending path by which the generated index information is transmitted to the projector. In this embodiment, the sending path in the path information may be a first path or a second path; specifically, the path may be selected by the user when the voice information is sent, or may be preset.
In the first path, the first server sends the index information to the mobile terminal, the mobile terminal sends it to the second server, and the second server sends it to the projector. In the second path, the first server sends the index information directly to the second server, which then sends it to the projector. In this embodiment, when the first server and the second server do not have the right of direct communication, the sending path in the path information may be the first path; of course, when they do have that right, the sending path may be either the first path or the second path.
Step S220: and the first server stores the voice information, generates index information associated with the voice information, and determines a sending path of the index information according to the path information.
And after receiving the voice information, the first server stores the voice information and generates index information. After the index information is generated, the sending path of the index information is determined according to the path information in the voice information.
Step S230: when the sending path is a first path, the first server sends the index information to the mobile terminal, the mobile terminal sends the index information to the second server, and the second server sends the index information to a projector; and when the sending path is a second path, the first server sends the index information to the second server, and the second server sends the index information to the projector.
Depending on the sending path determined, the server delivers the index information to the projector in different ways.
Specifically, after the sending path is determined to be the first path, the first server sends the index information to the mobile terminal, then the index information is sent to the second server through the mobile terminal, and finally the index information is sent to the projector through the second server.
And when the sending path is determined to be the second path, the first server directly sends the index information to the second server, and the second server sends the index information to the projector after receiving the index information.
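The two sending paths of step S230 can be sketched as hop chains; the hop names and send callables below are illustrative assumptions, not terms from the patent.

```python
def dispatch_index(index, path, hops):
    """Forward the index information along the chosen sending path.
    `hops` maps a hop name to a send callable; returns the chain used."""
    if path == "first":
        # first server -> mobile terminal -> second server -> projector
        chain = ["first_to_terminal", "terminal_to_second", "second_to_projector"]
    elif path == "second":
        # first server -> second server -> projector (requires the right
        # of direct communication between the two servers)
        chain = ["first_to_second", "second_to_projector"]
    else:
        raise ValueError("unknown sending path: %r" % path)
    for hop in chain:
        hops[hop](index)
    return chain
```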
Step S240: The projector acquires the voice information from the first server according to the index information.
In the step of the projector acquiring the voice information from the server according to the index information, the projector acquires the stored voice information from the first server according to the index information.
Third embodiment
In the voice interaction method of the projector provided in this embodiment, the method may further include sending voice information to the mobile terminal 200 by the projector 300 as shown in fig. 7. Referring to fig. 7, the projector 300 transmitting the voice information to the mobile terminal 200 may include:
step S310: the projector 300 transmits another voice message to the server 100.
Step S320: the server 100 stores the other voice information and generates other index information associated with the other voice information.
Step S330: the server 100 transmits the other index information to the mobile terminal 200.
Step S340: the mobile terminal 200 acquires the other voice information from the server 100 according to the other index information.
It is understood that, in the present embodiment, a method in which the projector 300 transmits another voice message to the mobile terminal 200 through the server 100 is similar to a method in which the mobile terminal 200 transmits a voice message to the projector 300 through the server 100.
In this embodiment, the voice information sent by the projector to the mobile terminal may be recorded through the projector's remote controller, recorded directly by the projector, or recorded in some other way. Likewise, the sending of this other voice information by the projector may be controlled through the remote controller, operated directly on the projector, or performed in some other manner.
Before step S330 in this embodiment, the server 100 may determine whether the projector 300 is paired with the mobile terminal 200 to which the voice information is to be transmitted, and if so, perform the step of transmitting the other index information to the mobile terminal 200. When the mobile terminal 200 acquires the other voice information, it may play it.
Further, if the server 100 determines that the projector 300 is not paired with the mobile terminal 200, the step of transmitting the other index information to the mobile terminal 200 is not performed, and a notice that the voice information failed to be sent may be fed back to the projector 300.
When the mobile terminal 200 obtains the other index information, the server 100 may feed back to the projector 300 that the voice information was sent successfully.
Of course, in this embodiment, the mobile terminal 200 may perform the step of acquiring the other voice information from the server 100 according to the other index information only when it detects that the current network environment is a WiFi network.
Further, in this embodiment, the server 100 may likewise include a first server and a second server. The first server receives the other voice information sent by the projector 300, stores it, and generates other index information associated with it; the other index information includes the storage location of the other voice information. The other voice information includes path information for the other index information, and the first server determines the sending path of the other index information according to that path information. When the sending path is the first path, the first server sends the other index information to the projector, the projector sends it to the second server, and the second server sends it to the mobile terminal, so that the mobile terminal can acquire the other voice information from the first server according to the other index information. When the sending path is the second path, the first server sends the other index information directly to the second server, and the second server sends it to the mobile terminal.
Fourth embodiment
The embodiment provides a voice interaction system of a projector, as shown in fig. 1, the system includes a mobile terminal 200, a server 100 and a projector 300, wherein the mobile terminal 200 is configured to send voice information to the server 100; the server 100 is configured to store the voice information and generate index information associated with the voice information; the server 100 is further configured to send the index information to the projector 300; the projector 300 is configured to obtain the voice information from the server 100 according to the index information and play the voice information.
And, the projector 300 is also used to transmit another voice message to the server 100; the server 100 is further configured to store the another voice information and generate another index information associated with the another voice information; the server 100 is further configured to send the other index information to the mobile terminal 200; the mobile terminal 200 is further configured to obtain the other voice message from the server 100 according to the other index information and play the other voice message.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It is noted that, herein, relational terms such as first and second, other, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit it; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention should be included in its protection scope.
The above description covers only specific embodiments of the present invention, and the scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A method of voice interaction for a projector, the method comprising:
the mobile terminal sends voice information to the server; the voice information is input by a user through the mobile terminal;
the server stores the voice information and generates index information associated with the voice information;
the server sends the index information to a projector;
the projector acquires the voice information from the server according to the index information;
wherein the server comprises a first server and a second server, and wherein:
the step of the mobile terminal sending voice information to the server comprises: the mobile terminal sends the voice information to the first server, wherein the voice information further comprises path information for the index information;
the step of the server storing the voice information and generating index information associated with the voice information comprises: the first server stores the voice information, generates the index information associated with the voice information, and determines a sending path of the index information according to the path information;
the step of the server sending the index information to the projector comprises:
when the first server and the second server do not have direct communication rights, the sending path is a first path: the first server sends the index information to the mobile terminal, the mobile terminal sends the index information to the second server, and the second server sends the index information to the projector;
when the first server and the second server have direct communication rights, the sending path is a second path: the first server sends the index information to the second server, and the second server sends the index information to the projector;
the step of the projector acquiring the voice information from the server according to the index information comprises the following steps:
the projector acquires the voice information from the first server according to the index information;
the voice information further comprises an identity identifier and an identification code of the mobile terminal; before the step of the server sending the index information to the projector, the method further comprises:
the server searches for an identification code matching the identity identifier; when a matching identification code is found, the server judges whether the found identification code is consistent with the identification code in the voice information, and if so, the mobile terminal and the projector are paired;
the server sends the index information to a projector, and the method comprises the following steps:
in the case that the projector is in a power-off state or its network connection is disconnected, so that the index information has not yet been sent to the projector, the server detects the power-on state or network connection state of the projector at regular intervals, and sends the index information to the projector once the projector is powered on or network-connected;
the method further comprises the following steps:
the projector sends other voice information to the server, the other voice information being recorded via a remote controller of the projector;
the server stores the other voice information and generates other index information associated with the other voice information;
the server sends the other index information to the mobile terminal;
and the mobile terminal acquires the other voice information from the server according to the other index information.
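The two delivery paths recited in claim 1 can be sketched as a small routing function. This is a minimal Python illustration; the hop names are hypothetical, and the claim does not specify transports or message formats:

```python
def route_index_info(has_direct_link: bool) -> list[str]:
    """Return the hop sequence the index information travels from the
    first server to the projector, depending on whether the first and
    second servers are allowed to communicate directly."""
    if has_direct_link:
        # Second path: first server -> second server -> projector
        return ["first_server", "second_server", "projector"]
    # First path: the mobile terminal relays the index to the second server
    return ["first_server", "mobile_terminal", "second_server", "projector"]
```

In both cases the projector ends up holding only the index; the voice payload itself is fetched from the first server afterwards.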
2. The method of claim 1, wherein before the step of the projector acquiring the voice information from the server according to the index information, the method further comprises:
the projector detects whether the current network environment is a WiFi network, and if so, the projector acquires the voice information from the server according to the index information.
3. The method of claim 1, wherein the step of the projector obtaining the voice information from the server according to the index information comprises:
the projector sends a voice acquisition request to the server, wherein the voice acquisition request comprises the index information;
the server receives the voice acquisition request and searches stored voice information according to the index information;
the server sends the searched voice information to the projector;
the projector receives the voice information.
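The exchange in claim 3 reduces to a keyed lookup on the server: store the voice data, hand back an index, and later resolve a voice-acquisition request carrying that index. A minimal sketch, with class and method names that are illustrative rather than from the patent:

```python
class VoiceStore:
    """Server-side store: keeps voice data and issues index information."""

    def __init__(self) -> None:
        self._voices: dict[str, bytes] = {}
        self._next_id = 0

    def store(self, voice: bytes) -> str:
        """Store voice information and generate its associated index."""
        index = f"voice-{self._next_id}"
        self._next_id += 1
        self._voices[index] = voice
        return index

    def handle_acquisition_request(self, index: str) -> bytes:
        """Search the stored voice information according to the index."""
        return self._voices[index]

store = VoiceStore()
idx = store.store(b"message from mobile terminal")
# The projector sends `idx` in its acquisition request and gets the data back
voice = store.handle_acquisition_request(idx)
```

The index is the only thing that travels along the relay paths of claim 1; the (potentially large) voice payload moves just once, between the first server and the projector.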
4. The method of claim 1, wherein after the projector obtains the voice information from the server according to the index information, the method further comprises:
and the projector plays the voice information.
5. The method of claim 1, wherein before the server sends the other index information to the mobile terminal, the method further comprises:
the server determines whether the mobile terminal and the projector are paired,
and if so, the server executes the step of sending the other index information to the mobile terminal.
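The pairing check that gates delivery in claims 1 and 5 compares the identification code carried in the voice information against the code registered for the sender's identity identifier. A hypothetical sketch; the registry layout is an assumption, since the patent does not describe how the server stores pairings:

```python
def is_paired(registry: dict[str, str], identity_id: str, code_in_message: str) -> bool:
    """Return True when an identification code registered for the identity
    identifier exists and matches the code carried in the voice information."""
    registered = registry.get(identity_id)
    if registered is None:
        return False  # no identification code matched the identity identifier
    return registered == code_in_message
```

Only when this check succeeds does the server forward the index information to the paired device.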
6. A voice interaction system of a projector, characterized in that the system comprises a mobile terminal, a server, and a projector, wherein:
the mobile terminal is used for sending voice information to the server; the voice information is input by a user through the mobile terminal;
the server is used for storing the voice information and generating index information associated with the voice information;
the server is further configured to, in the case that the projector is in a power-off state or its network connection is disconnected so that the index information has not yet been sent to the projector, detect the power-on state or network connection state of the projector at regular intervals, and send the index information to the projector once the projector is powered on or network-connected;
the projector is used for acquiring the voice information from the server according to the index information and playing the voice information;
the server comprises a first server and a second server;
the mobile terminal sends the voice information to the first server, wherein the voice information further comprises path information for the index information;
the first server stores the voice information, generates the index information associated with the voice information, and determines a sending path of the index information according to the path information;
when the first server and the second server do not have direct communication rights, the sending path is a first path: the first server sends the index information to the mobile terminal, the mobile terminal sends the index information to the second server, and the second server sends the index information to the projector;
when the first server and the second server have direct communication rights, the sending path is a second path: the first server sends the index information to the second server, and the second server sends the index information to the projector;
the projector acquires the voice information from the first server according to the index information;
the voice information further comprises an identity identifier and an identification code of the mobile terminal; the server is further configured to search for an identification code matching the identity identifier; when a matching identification code is found, the server judges whether the found identification code is consistent with the identification code in the voice information, and if so, the mobile terminal and the projector are paired;
the projector sends other voice information to the server, the other voice information being recorded via a remote controller of the projector;
the server stores the other voice information and generates the other index information associated with the other voice information;
the server sends the other index information to the mobile terminal;
and the mobile terminal acquires the other voice information from the server according to the other index information.
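The deferred-delivery behavior recited in claim 1 and in the system claim — the server re-checks the projector's state at regular intervals and sends the index once the projector is powered on and connected — can be sketched as a polling loop. The interval, retry cap, and callback signatures here are illustrative assumptions:

```python
import time
from typing import Callable

def deliver_when_online(
    is_online: Callable[[], bool],
    send: Callable[[str], None],
    index_info: str,
    interval_s: float = 0.01,
    max_checks: int = 100,
) -> bool:
    """Check the projector's state at regular intervals; deliver the index
    information as soon as the projector reports itself online."""
    for _ in range(max_checks):
        if is_online():
            send(index_info)
            return True
        time.sleep(interval_s)
    return False  # projector never came online within the retry budget

# Simulated projector that powers on at the third status check
state = {"checks": 0}
def projector_online() -> bool:
    state["checks"] += 1
    return state["checks"] >= 3

delivered: list[str] = []
ok = deliver_when_online(projector_online, delivered.append, "voice-0")
```

A production server would more likely react to a connection event from the projector than poll on a timer, but the timed check is what the claim language describes.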
CN201610803995.4A 2016-09-06 2016-09-06 Voice interaction method and system of projector Active CN106297804B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610803995.4A CN106297804B (en) 2016-09-06 2016-09-06 Voice interaction method and system of projector

Publications (2)

Publication Number Publication Date
CN106297804A CN106297804A (en) 2017-01-04
CN106297804B true CN106297804B (en) 2020-10-20

Family

ID=57710919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610803995.4A Active CN106297804B (en) 2016-09-06 2016-09-06 Voice interaction method and system of projector

Country Status (1)

Country Link
CN (1) CN106297804B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109495863A (en) * 2018-09-21 2019-03-19 北京车和家信息技术有限公司 Exchange method and relevant device
CN113365123B (en) * 2021-06-02 2022-10-28 深圳市环球数码科技有限公司 Method, system and storage medium for operating cinema showing server by using light source chip

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203180963U (en) * 2012-09-27 2013-09-04 博雅网络游戏开发(深圳)有限公司 Asynchronous voice system used in mobile terminal games
CN204134197U (en) * 2014-08-07 2015-02-04 深圳市捷通语音电子有限公司 Intelligent toy system
CN105530239A (en) * 2015-11-26 2016-04-27 广州酷狗计算机科技有限公司 Multimedia data obtaining method and device
CN205407889U (en) * 2016-03-10 2016-07-27 张立秀 High in clouds voice service provides system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102263929A (en) * 2010-05-31 2011-11-30 黄金富 Conference video information real-time publishing system and corresponding devices


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 610094 Tianfu Software Park Area A, 1129 Century City Road, Chengdu High-tech Zone, Sichuan Province

Applicant after: Chengdu Jimi Technology Co., Ltd.

Address before: No. 501, 5th Floor, Building 7, Area A, Tianfu Software Park, No. 1129 Century City Road, High-tech Zone, Chengdu, Sichuan Province 610000

Applicant before: CHENGDU XGIMI TECHNOLOGY CO., LTD.

GR01 Patent grant