CN115379043A - Cross-device text continuation method and electronic device - Google Patents

Cross-device text continuation method and electronic device

Info

Publication number
CN115379043A
Authority
CN
China
Prior art keywords
information
electronic device
text content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110539423.0A
Other languages
Chinese (zh)
Other versions
CN115379043B (en)
Inventor
徐文华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110539423.0A
Priority to PCT/CN2022/085233 (WO2022242343A1)
Publication of CN115379043A
Application granted
Publication of CN115379043B
Legal status: Active

Classifications

    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L13/047 Architecture of speech synthesisers
    • G10L13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • H04L63/0442 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload, and wherein the sending and receiving network entities apply asymmetric encryption, i.e. different keys for encryption and decryption
    • H04L63/08 Network architectures or network communication protocols for network security for authentication of entities
    • H04L9/40 Network security protocols
    • H04M1/72412 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
    • H04M1/72436 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • H04M1/725 Cordless telephones

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Environmental & Geological Engineering (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Transfer Between Computers (AREA)
  • Telephone Function (AREA)

Abstract

The application provides a cross-device text continuation method, comprising: when an interface of a first electronic device displays text content and the distance between the first electronic device and a second electronic device is less than or equal to a first threshold, the first electronic device extracts first information, where the first information includes information related to the text content; the first electronic device sends the first information to the second electronic device; and the second electronic device plays the voice corresponding to the text content according to the first information. The provided scheme enables continuation across devices and across media, improving the user experience.

Description

Cross-device text continuation method and electronic device
Technical Field
The present application relates to the field of data processing, and in particular, to a cross-device text continuation method and an electronic device.
Background
With the rapid development of mobile device hardware and growing consumer demand, smart devices have diversified, and more and more electronic devices have media capabilities similar to those of a smartphone, such as smart speakers, watches, and large-screen devices. For example, compared with a mobile phone, a smart speaker provides better sound quality and better voice interaction. Speaker devices can complement and cooperate with other devices to provide cross-device service continuation, matching users' habits in different scenarios.
In terms of cross-device service continuation, audio playback hand-off is already possible. For example, a user listens to music in a music application (APP) on a mobile phone; when the phone is brought close to a speaker device, the music switches to the speaker device and stops playing on the phone. Similarly, when the user listens to music on the speaker device and brings the phone close to it, the music switches to the phone and stops playing on the speaker device. However, this approach only enables continuation within the same medium (such as the audio described above), not continuation across media.
At present, continuous text-to-speech playback on the same client can also be achieved, including: receiving a user command to play a text as speech; obtaining the text from the corresponding digital document on a server while playing an intermediate-point voice file; after the text has been obtained, checking whether playback of the termination-point voice file has finished and, if so, starting speech generation and playback from the position in the text corresponding to the end of that voice file; and, when the user issues a command to stop speech playback, recording the position in the text where playback currently stops, updating the intermediate stop point with that position, and generating a voice file for a text segment of set length before and after the current stop point to replace the voice file at the stop point. This alleviates the slow loading that occurs when a digital document is played as speech, shortens the user's waiting time, and improves the user experience. However, this approach does not enable continuation between different client types.
Disclosure of Invention
The application provides a cross-device text continuation method and an electronic device, which enable continuation across devices and across different media and improve the user experience.
In a first aspect, a cross-device text continuation method is provided, the method comprising: when an interface of a first electronic device displays text content and the distance between the first electronic device and a second electronic device is less than or equal to a first threshold, the first electronic device extracts first information, where the first information includes information related to the text content;
the first electronic device sends the first information to the second electronic device;
and the second electronic device plays the voice corresponding to the text content according to the first information.
According to this scheme, when the interface of the first electronic device displays the text content and the distance between the first electronic device and the second electronic device is less than or equal to the first threshold, the first electronic device extracts the first information and sends it to the second electronic device, so that the second electronic device can play the voice corresponding to the text content according to the first information. When the first electronic device is brought close to the second electronic device, the second electronic device can play the text content of the first electronic device as audio, enabling cross-device continuation across different media and improving the user experience.
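As a concrete illustration of the flow above, the following is a minimal sketch of the first-device side, assuming hypothetical helpers for distance estimation, the reading view, and the device-to-device transport (none of these names come from the application):

```python
# Minimal sketch of the first-device side of the flow above (not the patent's
# actual implementation). Device discovery, distance estimation, the reading
# view, and the transport (peer.send) are hypothetical placeholders.
import json

FIRST_THRESHOLD_M = 0.3  # assumed proximity threshold, in meters


def on_proximity_changed(distance_m: float, reader_view, peer) -> None:
    """Called when the estimated distance to the second device changes."""
    if not reader_view.is_showing_text():
        return
    if distance_m > FIRST_THRESHOLD_M:
        return

    # Extract the "first information": data describing the text being read.
    first_info = {
        "source_address": reader_view.source_url(),   # where the text came from
        "offset": reader_view.current_offset(),       # current reading position
        "user": reader_view.logged_in_user_id(),      # user accessing the text
    }

    # Hand the information to the nearby second device (e.g. a smart speaker).
    peer.send(json.dumps(first_info).encode("utf-8"))
```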
With reference to the first aspect, in some possible implementations, the first information includes at least one of the following information:
a source address of the text content, a current offset location of the text content, information of a user who accesses the text content.
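For illustration only, the three items above could be carried in a structure such as the following; the field names are assumptions rather than terms defined by the application:

```python
# Illustrative container for the "first information"; field names are assumed.
from dataclasses import dataclass


@dataclass
class FirstInformation:
    source_address: str   # source address (e.g. URL) of the text content
    current_offset: int   # current offset position within the text content
    user_info: str        # identifier of the user accessing the text content
```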
With reference to the first aspect, in some possible implementations, the method further includes:
the second electronic device sends second information to the first electronic device, where the second information includes information used for encryption;
the first electronic device sends the first information to the second electronic device, including:
the first electronic device encrypts the first information using the second information;
the first electronic device sends the encrypted first information to the second electronic device;
the method further includes:
the second electronic device decrypts the encrypted first information to obtain the first information.
According to this scheme, encrypting the first information allows it to be transmitted securely, ensuring the security of information transmission.
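The application does not fix a particular algorithm; since the listed classification H04L63/0442 points at asymmetric encryption, the sketch below assumes the second information is an RSA public key generated by the second device, using the third-party `cryptography` package purely as an illustration:

```python
# Sketch of one way the exchange could work, assuming the "second information"
# is an RSA public key sent by the second device. Not the patent's scheme.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Second device: generate a key pair and send the public key (second information).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
second_information = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

# First device: encrypt the first information with the received public key.
first_information = b'{"source_address": "...", "offset": 1024, "user": "u1"}'
ciphertext = serialization.load_pem_public_key(second_information).encrypt(
    first_information, OAEP)

# Second device: decrypt to recover the first information.
assert private_key.decrypt(ciphertext, OAEP) == first_information
```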
With reference to the first aspect, in some possible implementation manners, the playing, by the second electronic device, the voice corresponding to the text content according to the first information includes:
the second electronic device requests the text content from a cloud server of the second electronic device according to the source address of the text content in the first information and the information of the user accessing the text content;
the cloud server of the second electronic device sends the text content to the second electronic device;
the second electronic device extracts target text content from the text content according to the current offset position of the text content in the first information;
the second electronic device converts the target text content into voice;
and the second electronic device plays the voice.
According to this scheme, the second electronic device can request the text content from its cloud server according to the source address of the text content and the information of the user accessing the text content in the first information, and, after receiving the text content, extract the target text content according to the current offset position and convert it into voice for playback. Because the target text content is extracted according to the current offset position, the position the user had read to on the first electronic device is located first and playback continues from there, so the text content is continued from the first electronic device to the second electronic device, improving the user experience.
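A minimal sketch of this device-side path is shown below; the cloud endpoint, the text-to-speech engine, and the audio player are hypothetical placeholders, and `requests` is used only as a generic HTTP client:

```python
# Sketch of the device-side continuation path described above; not an actual API.
import requests  # third-party HTTP client, used here for illustration


def continue_as_speech(first_info: dict, cloud_base_url: str, tts, player) -> None:
    # Request the full text from the second device's cloud server, identified
    # by the source address and the accessing user's information.
    resp = requests.get(
        f"{cloud_base_url}/text",  # hypothetical endpoint
        params={"source": first_info["source_address"], "user": first_info["user"]},
        timeout=10,
    )
    resp.raise_for_status()
    text = resp.text

    # Extract the target text content: everything from the current offset on,
    # i.e. the part the user had not yet read on the first device.
    target_text = text[first_info["offset"]:]

    # Convert the target text to voice and play it.
    audio = tts.synthesize(target_text)   # hypothetical text-to-speech engine
    player.play(audio)                    # hypothetical audio output
```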
With reference to the first aspect, in some possible implementations, the method further includes:
the second electronic equipment sends the first information to a cloud server of the second electronic equipment;
the second electronic device plays the voice corresponding to the text content according to the first information, and the method comprises the following steps:
the second electronic equipment requests the text content from a cloud server of the second electronic equipment according to the source address of the text content in the first information and the information of the user accessing the text content;
the cloud server of the second electronic equipment extracts target text content from the text content according to the current offset position in the first information;
the cloud server of the second electronic equipment converts the target text content into voice;
the cloud server of the second electronic device sends the voice to the second electronic device;
and the second electronic equipment plays the voice.
According to this scheme, the second electronic device can request the text content from its cloud server according to the source address of the text content and the information of the user accessing the text content in the first information. After receiving the request, the cloud server extracts the target text content according to the current offset position, converts it into voice, and sends the voice to the second electronic device for playback. Because the voice is generated from the target text content extracted at the current offset position, the text content is continued from the first electronic device to the second electronic device, improving the user experience.
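For contrast with the device-side variant, the following sketch moves the extraction and conversion to the cloud server; the text store and the text-to-speech backend are hypothetical placeholders:

```python
# Sketch of the cloud-side variant: the server locates the text, cuts it at the
# offset, synthesizes the voice, and returns the audio for the device to play.
def handle_continuation_request(first_info: dict, text_store, tts) -> bytes:
    """Runs on the cloud server of the second device; returns encoded audio."""
    # Fetch the full text for this source address, checking the user's access.
    text = text_store.fetch(first_info["source_address"], user=first_info["user"])

    # The server, not the device, extracts the target text at the offset ...
    target_text = text[first_info["offset"]:]

    # ... and converts it to voice; the device only receives and plays audio.
    return tts.synthesize(target_text)
```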
With reference to the first aspect, in some possible implementations, the method further includes:
the cloud server of the second electronic device caches the voice.
According to this scheme, the cloud server of the second electronic device can cache the converted voice. When another user reads the same text content and needs it played through a speaker device, the cloud server can send the cached voice segments to the speaker device instead of converting the text content into voice again, saving conversion time and improving efficiency.
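A minimal in-memory sketch of this caching idea follows; keying by source address and offset is an assumption, and the cache backend and text-to-speech engine are hypothetical placeholders:

```python
# Sketch of the caching idea: key the synthesized audio by source address and
# offset so a later request for the same passage can skip synthesis entirely.
class VoiceCache:
    def __init__(self, tts):
        self._tts = tts                                # hypothetical TTS engine
        self._store: dict[tuple[str, int], bytes] = {}

    def get_speech(self, source_address: str, offset: int, target_text: str) -> bytes:
        key = (source_address, offset)
        if key not in self._store:                     # first reader: synthesize
            self._store[key] = self._tts.synthesize(target_text)
        return self._store[key]                        # later readers: reuse audio
```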
With reference to the first aspect, in some possible implementations, the method further includes:
when the distance between the first electronic device and the second electronic device is smaller than or equal to the first threshold value again, the second electronic device sends third information to the first electronic device, wherein the third information is used for requesting identity information of the first electronic device;
the first electronic device sends identity information of the first electronic device to the second electronic device;
and the second electronic device determines whether to extract the first information according to the identity information of the first electronic device and the identity information of the second electronic device.
According to this scheme, when the distance between the first electronic device and the second electronic device is again less than or equal to the first threshold, the second electronic device can send third information to the first electronic device to request its identity information. After receiving the identity information sent by the first electronic device, the second electronic device determines, based on that information and its own identity information, whether to extract the first information, which triggers the second electronic device to stop playing and hand reading back to the first electronic device. In this way, the text content is continued from the second electronic device back to the first electronic device, further improving the user experience.
With reference to the first aspect, in some possible implementation manners, the determining, by the second electronic device, whether to extract the first information according to the identity information of the first electronic device and the identity information of the second electronic device includes:
the second electronic device sends the identity information of the first electronic device and the identity information of the second electronic device to a cloud server of the second electronic device;
the cloud server of the second electronic device verifies the identity information of the first electronic device and the identity information of the second electronic device;
the cloud server of the second electronic device sends the verified result to the second electronic device;
and the second electronic device determines whether to extract the first information according to the verified result.
With reference to the first aspect, in some possible implementations, the determining, by the second electronic device, whether to extract the first information according to the verified result includes:
if the verified result is that the identity information of the first electronic device does not match the identity information of the second electronic device, the second electronic device determines not to extract the first information; or, alternatively,
if the verified result is that the identity information of the first electronic device matches the identity information of the second electronic device, the second electronic device determines to extract the first information;
the method further includes:
the second electronic device sends the first information to the first electronic device;
the first electronic device receives the first information;
and the first electronic device determines the text position played by the second electronic device according to the first information.
According to this scheme, when the verified result is that the identity information of the first electronic device matches the identity information of the second electronic device, the second electronic device determines to extract the first information and sends it to the first electronic device, so that the first electronic device can determine the text position played by the second electronic device according to the first information. In this way, the text content is continued from the second electronic device back to the first electronic device, further improving the user experience.
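As an illustration of this re-approach flow, the sketch below assumes hypothetical objects for the peer link, the cloud verification call, and the playback state; none of these names come from the application:

```python
# Sketch of the re-approach flow above; all collaborators are placeholders.
def on_second_approach(second_device, first_device_link, cloud) -> None:
    # Third information: ask the first device for its identity information.
    first_id = first_device_link.request_identity()

    # Let the cloud server verify both identities and return the result.
    matched = cloud.verify_identities(first_id, second_device.identity)
    if not matched:
        return  # identities do not match: keep playing, extract nothing

    # Identities match: stop playback, extract the first information with the
    # position reached so far, and hand it back to the first device so it can
    # resume reading at the right place.
    second_device.player.stop()
    first_info = {
        "source_address": second_device.playing_source,
        "offset": second_device.current_spoken_offset(),
        "user": second_device.playing_user,
    }
    first_device_link.send(first_info)
```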
In a second aspect, a cross-device text continuation method is provided, the method comprising: a second electronic device receives first information sent by a first electronic device, where the first information includes information related to text content;
and the second electronic device plays the voice corresponding to the text content according to the first information.
According to this scheme, the second electronic device can play the voice corresponding to the text content according to the received first information. Because the second electronic device can play the text content of the first electronic device as audio, continuation across devices and across different media is realized, improving the user experience.
With reference to the second aspect, in some possible implementations, the first information includes at least one of the following information:
a source address of the text content, a current offset location of the text content, information of a user who accesses the text content.
With reference to the second aspect, in some possible implementations, the method further includes:
the second electronic device sends second information to the first electronic device, where the second information includes information used for encryption;
the second electronic device receiving the first information sent by the first electronic device includes:
the second electronic device receives the encrypted first information sent by the first electronic device;
and the second electronic device decrypts the encrypted first information to obtain the first information.
According to this scheme, encrypting the first information allows it to be transmitted securely, ensuring the security of information transmission.
With reference to the second aspect, in some possible implementation manners, the playing, by the second electronic device, the voice corresponding to the text content according to the first information includes:
the second electronic equipment requests the text content from a cloud server of the second electronic equipment according to the source address of the text content in the first information and the information of the user accessing the text content;
the second electronic device receiving the text content;
the second electronic equipment extracts target text content from the text content according to the current offset position of the text content in the first information;
the second electronic equipment converts the target text content into voice;
and the second electronic equipment plays the voice.
According to the scheme provided by the application, the second electronic device can request the text content from the cloud server of the second electronic device according to the source address of the text content in the first information and the information of the user accessing the text content, extract the target text content from the text content according to the current offset position after receiving the text content, and convert the target text content into voice to be played. The second electronic device extracts the target text content from the text content according to the current offset position, that is, the specific position of the user reading before can be located first, and the target text content is converted into voice to be played continuously, so that the text content can be continued between the second electronic device and the first electronic device, and further, the user experience can be improved.
With reference to the second aspect, in some possible implementation manners, the playing, by the second electronic device, the voice corresponding to the text content according to the first information includes:
the second electronic equipment requests the text content from a cloud server of the second electronic equipment according to the source address of the text content in the first information and the information of the user accessing the text content;
the second electronic equipment receives voice sent by a cloud server of the second electronic equipment;
and the second electronic equipment plays the voice.
According to this scheme, the second electronic device can request the text content from its cloud server according to the source address of the text content and the information of the user accessing the text content in the first information, and after receiving the request, the cloud server of the second electronic device sends the voice to the second electronic device, which plays it. Because the voice received by the second electronic device is obtained by the cloud server converting the text content, the text content can be continued between the second electronic device and the first electronic device, further improving the user experience.
With reference to the second aspect, in some possible implementations, the method further includes:
when the distance between the first electronic device and the second electronic device is smaller than or equal to a first threshold value again, the second electronic device sends third information to the first electronic device, wherein the third information is used for requesting identity information of the first electronic device;
the second electronic equipment receives the identity information of the first electronic equipment sent by the first electronic equipment;
and the second electronic equipment determines whether to extract the first information according to the identity information of the first electronic equipment and the identity information of the second electronic equipment.
According to this scheme, when the distance between the first electronic device and the second electronic device is again less than or equal to the first threshold, the second electronic device can send third information to the first electronic device to request its identity information. After receiving the identity information sent by the first electronic device, the second electronic device determines, based on that information and its own identity information, whether to extract the first information, which triggers the second electronic device to stop playing and hand reading back to the first electronic device. In this way, the text content is continued from the second electronic device back to the first electronic device, further improving the user experience.
With reference to the second aspect, in some possible implementation manners, the determining, by the second electronic device, whether to extract the first information according to the identity information of the first electronic device and the identity information of the second electronic device includes:
the second electronic equipment sends the identity information of the first electronic equipment and the identity information of the second electronic equipment to a cloud server of the second electronic equipment;
and the second electronic equipment determines whether to extract the first information according to a verified result sent by a cloud server of the second electronic equipment.
With reference to the second aspect, in some possible implementations, the determining, by the second electronic device, whether to extract the first information according to a verified result sent by a cloud server of the second electronic device includes:
if the verified result is that the identity information of the first electronic device does not match the identity information of the second electronic device, the second electronic device determines not to extract the first information; or, alternatively,
if the verified result is that the identity information of the first electronic device matches the identity information of the second electronic device, the second electronic device determines to extract the first information;
the method further includes:
the second electronic device sends the first information to the first electronic device.
According to this scheme, when the verified result is that the identity information of the first electronic device matches the identity information of the second electronic device, the second electronic device determines to extract the first information and sends it to the first electronic device, so that the first electronic device can determine the text position played by the second electronic device according to the first information. In this way, the text content is continued from the second electronic device back to the first electronic device, further improving the user experience.
In a third aspect, a cross-device text continuation method is provided, the method comprising:
when an interface of a first electronic device displays text content and the distance between the first electronic device and a second electronic device is less than or equal to a first threshold, the first electronic device extracts first information, where the first information includes information related to the text content;
and the first electronic device sends the first information to the second electronic device.
According to this scheme, when the interface of the first electronic device displays the text content and the distance between the first electronic device and the second electronic device is less than or equal to the first threshold, the first electronic device extracts the first information and sends it to the second electronic device, so that the second electronic device can play the voice corresponding to the text content according to the first information. When the first electronic device is brought close to the second electronic device, the second electronic device can play the text content of the first electronic device as audio, enabling cross-device continuation across different media and improving the user experience.
With reference to the third aspect, in some possible implementations, the first information includes at least one of the following information:
a source address of the text content, a current offset location of the text content, information of a user who accesses the text content.
With reference to the third aspect, in some possible implementations, the method further includes:
the first electronic device receiving second information, the second information comprising information for encryption;
the first electronic device sends the first information to the second electronic device, including:
the first electronic equipment encrypts the first information by using the second information;
and the first electronic equipment sends the encrypted first information to the second electronic equipment.
According to the scheme, the first information can be safely transmitted by encrypting the first information, so that the safety of information transmission can be ensured.
With reference to the third aspect, in some possible implementations, the method further includes:
the first electronic equipment receives third information, wherein the third information is used for requesting identity information of the first electronic equipment;
and the first electronic equipment sends the identity information of the first electronic equipment to the second electronic equipment.
According to this scheme, the first electronic device receives third information sent by the second electronic device requesting its identity information, and sends the identity information of the first electronic device to the second electronic device, so that the second electronic device determines, according to that information and its own identity information, whether to extract the first information, which triggers the second electronic device to stop playing and hand reading back to the first electronic device. In this way, the text content is continued from the second electronic device back to the first electronic device, further improving the user experience.
With reference to the third aspect, in some possible implementations, the method further includes:
the first electronic equipment receives the first information;
and the first electronic equipment determines the text position played by the second electronic equipment according to the first information.
According to this scheme, after the first electronic device receives the first information, it can determine the text position played by the second electronic device according to the first information, so that the text content is continued from the second electronic device back to the first electronic device, further improving the user experience.
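For illustration, the first device might use the returned information as sketched below; the reader_view object and its methods are hypothetical placeholders:

```python
# Sketch of what the first device could do with the returned first information:
# jump the reading view to the position the speaker had reached.
def resume_reading(first_info: dict, reader_view) -> None:
    # The offset tells the first device how far the second device has played.
    reader_view.open(first_info["source_address"])
    reader_view.scroll_to_offset(first_info["offset"])  # continue reading here
```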
In a fourth aspect, a system is provided, the system comprising:
a first electronic device to: when an interface of first electronic equipment displays text content and the distance between the first electronic equipment and second electronic equipment is smaller than or equal to a first threshold value, extracting first information, wherein the first information comprises information related to the text content;
sending the first information to the second electronic device;
the second electronic device is to: and playing the voice corresponding to the text content according to the first information.
With reference to the fourth aspect, in some possible implementations, the first information includes at least one of the following information:
a source address of the text content, a current offset location of the text content, information of a user who accesses the text content.
With reference to the fourth aspect, in some possible implementations, the second electronic device is further configured to:
sending second information to the first electronic device, wherein the second information comprises information used for encryption;
the first electronic device is further to:
encrypting the first information using the second information;
sending the encrypted first information to the second electronic equipment;
the second electronic device is further to: and decrypting the encrypted first information to obtain the first information.
With reference to the fourth aspect, in some possible implementations, the second electronic device is further configured to:
requesting the text content from a cloud server of the second electronic equipment according to the source address of the text content in the first information and the information of the user accessing the text content;
the cloud server of the second electronic device is further to: sending the text content to the second electronic device;
the second electronic device is further to: extracting target text content from the text content according to the current offset position of the text content in the first information;
converting the target text content into speech;
and playing the voice.
With reference to the fourth aspect, in some possible implementations, the second electronic device is further configured to:
sending the first information to a cloud server of the second electronic device;
requesting the text content from a cloud server of the second electronic equipment according to the source address of the text content in the first information and the information of the user accessing the text content;
the cloud server of the second electronic device is further configured to: extracting target text content from the text content according to the current offset position in the first information;
converting the target text content into speech;
sending the voice to the second electronic device;
the second electronic device is further to: and playing the voice.
With reference to the fourth aspect, in some possible implementations, the cloud server of the second electronic device is further configured to: and caching the voice.
With reference to the fourth aspect, in some possible implementations, the second electronic device is further configured to:
when the distance between the first electronic device and the second electronic device is smaller than or equal to the first threshold value again, sending third information to the first electronic device, wherein the third information is used for requesting identity information of the first electronic device;
the first electronic device is further to: sending identity information of the first electronic equipment to the second electronic equipment;
the second electronic device is further to: and determining whether to extract the first information according to the identity information of the first electronic equipment and the identity information of the second electronic equipment.
With reference to the fourth aspect, in some possible implementations, the second electronic device is further configured to:
sending the identity information of the first electronic device and the identity information of the second electronic device to a cloud server of the second electronic device;
the cloud server of the second electronic device is further configured to: verifying the identity information of the first electronic equipment and the identity information of the second electronic equipment;
sending a verified result to the second electronic device;
the second electronic device is further to: and determining whether to extract the first information according to the checked result.
With reference to the fourth aspect, in some possible implementations, the second electronic device is further configured to:
if the verified result is that the identity information of the first electronic device does not match the identity information of the second electronic device, determining not to extract the first information; or, alternatively,
if the verified result is that the identity information of the first electronic device matches the identity information of the second electronic device, determining to extract the first information;
sending the first information to the first electronic device;
the first electronic device is further to: receiving the first information;
and determining the text position played by the second electronic equipment according to the first information.
For the beneficial effects of the fourth aspect, please refer to the content of the first aspect; details are not repeated.
In a fifth aspect, an electronic device is provided, which includes:
the electronic device comprises a communication module, a display module and a display module, wherein the communication module is used for receiving first information sent by first electronic equipment, and the first information comprises information related to text content;
and the playing module is used for playing the voice corresponding to the text content according to the first information.
With reference to the fifth aspect, in some possible implementations, the first information includes at least one of the following information:
a source address of the text content, a current offset location of the text content, information of a user who accesses the text content.
With reference to the fifth aspect, in some possible implementations, the communication module is further configured to: sending second information to the first electronic device, wherein the second information comprises information used for encryption;
receiving encrypted first information sent by the first electronic equipment;
the electronic device further includes:
and the decryption module is used for decrypting the encrypted first information to obtain the first information.
With reference to the fifth aspect, in some possible implementation manners, the electronic device further includes:
the request module is used for requesting the text content from a cloud server of the second electronic equipment according to the source address of the text content in the first information and the information of the user accessing the text content;
the communication module is further configured to: receiving the text content;
the electronic device further includes:
the extraction module is used for extracting target text content from the text content according to the current offset position of the text content in the first information;
the conversion module is used for converting the target text content into voice;
the playback module is further configured to: and playing the voice.
With reference to the fifth aspect, in some possible implementations, the electronic device further includes:
the request module is used for requesting the text content from a cloud server of the second electronic equipment according to the source address of the text content in the first information and the information of the user accessing the text content;
the communication module is further to: receiving voice sent by a cloud server of the second electronic equipment;
the playback module is further configured to: and playing the voice.
With reference to the fifth aspect, in some possible implementations, the communication module is further configured to:
when the distance between the first electronic equipment and the second electronic equipment is smaller than or equal to a first threshold value again, third information is sent to the first electronic equipment, and the third information is used for requesting identity information of the first electronic equipment;
receiving identity information of the first electronic equipment sent by the first electronic equipment;
the electronic device further includes:
and the determining module is used for determining whether to extract the first information according to the identity information of the first electronic equipment and the identity information of the second electronic equipment.
With reference to the fifth aspect, in some possible implementations, the communication module is further configured to:
sending the identity information of the first electronic device and the identity information of the second electronic device to a cloud server of the second electronic device;
the determination module is further to: and determining whether to extract the first information according to a checked result sent by the cloud server of the second electronic device.
With reference to the fifth aspect, in some possible implementations, the determining module is further configured to:
if the verified result is that the identity information of the first electronic device does not match the identity information of the second electronic device, determining not to extract the first information; or, alternatively,
if the verified result is that the identity information of the first electronic device matches the identity information of the second electronic device, determining to extract the first information;
the communication module is further to:
and sending the first information to the first electronic equipment.
For the beneficial effects of the fifth aspect, please refer to the contents of the second aspect, which are not described again.
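For illustration, the fifth-aspect modules could be composed as sketched below; every module implementation here is a hypothetical placeholder standing in for the modules named above:

```python
# Sketch of how the fifth-aspect modules could be composed on the second
# device; each attribute stands for one module named above.
class SecondDeviceApparatus:
    def __init__(self, communication, decryption, request, extraction,
                 conversion, playback):
        self.communication = communication  # receives and sends information
        self.decryption = decryption        # recovers the first information
        self.request = request              # asks the cloud server for the text
        self.extraction = extraction        # cuts the text at the offset
        self.conversion = conversion        # converts text to voice
        self.playback = playback            # plays the resulting voice

    def on_first_information(self, encrypted_blob: bytes) -> None:
        info = self.decryption.decrypt(encrypted_blob)
        text = self.request.fetch(info["source_address"], info["user"])
        target = self.extraction.extract(text, info["offset"])
        self.playback.play(self.conversion.to_speech(target))
```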
In a sixth aspect, an electronic device is provided, which includes:
the extraction module is used for extracting first information when an interface of first electronic equipment displays text content and the distance between the first electronic equipment and second electronic equipment is smaller than or equal to a first threshold, wherein the first information comprises information related to the text content.
A communication module, configured to send the first information to the second electronic device.
With reference to the sixth aspect, in some possible implementations, the first information includes at least one of the following information:
a source address of the text content, a current offset location of the text content, information of a user who accesses the text content.
With reference to the sixth aspect, in some possible implementations, the communication module is further configured to:
receiving second information, the second information comprising information for encryption;
the electronic device further includes:
the encryption module is used for encrypting the first information by utilizing the second information;
the communication module is further to: and sending the encrypted first information to the second electronic equipment.
With reference to the sixth aspect, in some possible implementations, the communication module is further configured to:
receiving third information, wherein the third information is used for requesting identity information of the first electronic equipment;
and sending the identity information of the first electronic equipment to the second electronic equipment.
With reference to the sixth aspect, in some possible implementations, the communication module is further configured to:
receiving the first information;
the electronic device further includes:
and the determining module is used for determining the text position played by the second electronic equipment according to the first information.
For the beneficial effects of the sixth aspect, please refer to the content of the third aspect, which is not described again.
In a seventh aspect, an electronic device is provided, including: one or more processors; a memory; one or more application programs; and one or more computer programs. The one or more computer programs are stored in the memory and comprise instructions. The instructions, when executed by the electronic device, cause the electronic device to perform the method in any one of the possible implementations of the second aspect or the third aspect described above.
In an eighth aspect, a chip system is provided, which includes at least one processor, and when program instructions are executed in the at least one processor, the functions of the method in any one of the possible implementations of the first aspect to the third aspect are implemented on the electronic device.
In a ninth aspect, there is provided a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of the possible implementations of the first to third aspects.
A tenth aspect provides a computer program product for causing an electronic device to perform the method of any one of the possible designs of the first to third aspects described above, when the computer program product is run on the electronic device.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a software structure of an electronic device according to an embodiment of the present application.
Fig. 3 is a schematic view of a scenario applied in an embodiment of the present application.
FIG. 4 is a schematic diagram of a set of GUIs provided by an embodiment of the present application.
FIG. 5 is a schematic diagram of another set of GUIs provided by an embodiment of the present application.
Fig. 6 is a schematic diagram of a method for text continuation across devices according to an embodiment of the present application.
Fig. 7 is a schematic diagram of another method for text continuation across devices according to an embodiment of the present application.
Fig. 8 is a schematic diagram of another cross-device text continuation method according to an embodiment of the present application.
Fig. 9 is a schematic diagram of another method for text continuation across devices according to an embodiment of the present application.
Fig. 10 is a schematic diagram of still another method for text continuation across devices according to an embodiment of the present application.
Fig. 11 is a schematic diagram of another method for text continuation across devices according to an embodiment of the present application.
Fig. 12 is a schematic block diagram of another electronic device provided in an embodiment of the present application.
Fig. 13 is a schematic block diagram of another electronic device provided in an embodiment of the present application.
Fig. 14 is a schematic block diagram of another electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments herein, "/" means "or" unless otherwise specified, for example, a/B may mean a or B; "and/or" herein is merely an association relationship describing an associated object, and means that there may be three relationships, for example, a and/or B, and may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more than two.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
The cross-device text continuation method provided in the embodiments of the present application may be applied to electronic devices such as a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), and a speaker device; the embodiments of the present application do not limit the specific type of the electronic device.
Fig. 1 shows a schematic structural diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The extraction of the first information for the text continuation across the devices and the encryption of the first information in the embodiment of the present application may be implemented by the processor 110.
It should be understood that the connection relationship between the modules illustrated in the embodiment of the present application is only an exemplary illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display text, images, video, etc., such as the text of FIG. 4 or FIG. 5 below. The display screen 194 includes a display panel. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a user takes a picture, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, an optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and converting into an image visible to the naked eye. The ISP can also carry out algorithm optimization on noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to be converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals in addition to digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
Fig. 2 is a block diagram of a software structure of the electronic device 100 according to the embodiment of the present application. The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom. The application layer may include a series of application packages.
As shown in fig. 2, the application package may include camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, APP1, APP2, and other applications.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without requiring user interaction, for example to notify that a download is complete or to provide a message alert. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information may be prompted in the status bar, a prompt tone may be sounded, the electronic device may vibrate, or an indicator light may flash.
With the rapid development of mobile device hardware technology and strong consumer demand, intelligent devices on the front-end side are becoming increasingly diverse, and more and more electronic devices have media capabilities similar to those of a smartphone, such as smart speakers, watches, and large screens. For example, compared with a mobile phone, a smart speaker can provide better sound quality and better voice interaction capability. A speaker device can complement and cooperate with other devices to provide cross-device service continuation capability, so as to meet users' habits in different scenarios.
Regarding cross-device service continuation capability, playback continuation of audio can currently be realized. For example, when a user listens to music in a music APP on a mobile phone and brings the mobile phone close to a speaker device, the music is switched to play on the speaker device and stops playing on the mobile phone; similarly, when the user listens to music on the speaker device and brings the mobile phone close to it, the music is switched to play on the mobile phone and stops playing on the speaker device. However, this approach only enables continuation within the same medium (such as the audio described above), not continuation across different media.
At present, continuous speech playback of a text on the same client can also be realized, including: receiving a user's command to play a text as speech; acquiring the text from the corresponding digital document on a server while playing an intermediate-point speech file; after the text has been acquired, checking whether the termination-point speech file has finished playing, and if so, starting speech generation and playback from the position in the text corresponding to the end of that file; and, when the user issues a command to stop speech playback of the text, recording the position in the text where playback currently stops, updating the intermediate stop point with this position, and generating a speech file corresponding to a text segment of a set length before and after the current stop point to replace the speech file at the stop point. This can solve the problem of slow loading when a digital document is played as speech, shorten the user's waiting time, and improve user experience. However, this approach does not enable continuation between different client types.
The present application provides a method for cross-device text continuation, which can realize continuation across devices and across different media and improve user experience.
Taking a mobile phone and a speaker device as an example, the following briefly introduces a scenario to which the solution of the present application can be applied.
Fig. 3 is a schematic view of a scenario to which an embodiment of the present application is applied. APP1 on the mobile phone may be a novel-reading APP, and APP2 may be a news APP. Text content on the mobile phone can be played as audio on the speaker device, and the audio on the speaker device can in turn be continued back to text on the mobile phone side for the user to keep reading.
Speaker device: provides the capability to switch between a normal antenna and a weak antenna. When switched to the weak antenna, it can provide a key for secure encryption; when switched back to the normal antenna, it can transmit the encrypted information to the mobile phone. The speaker device supports continuing text on the mobile phone side as audio on the speaker side, and conversely supports continuing audio on the speaker side as text on the mobile phone side.
Speaker cloud: provides content acquisition capability and realizes the acquisition of text content.
For convenience of understanding, the following embodiments of the present application will specifically describe a method for text continuation across devices, which is provided in the embodiments of the present application, by taking an electronic device with a structure shown in fig. 1 and fig. 2 as an example, and combining the drawings and an application scenario.
Fig. 4 shows a set of Graphical User Interfaces (GUIs) of a mobile phone, where from (a) in fig. 4 to (e) in fig. 4, a method for connecting APP2 in the mobile phone with a speaker device and implementing cross-device text continuation is shown.
Referring to the GUI shown in fig. 4 (a), the GUI is a desktop of a mobile phone. When the mobile phone detects that the user clicks the icon 401 of App2 on the desktop, app2 can be started, and a GUI as shown in fig. 4 (b) is displayed, which can be referred to as a news home interface.
Referring to the GUI shown in fig. 4 (b), a plurality of news-related information is displayed, and when the mobile phone detects that the user clicks on a certain news frame 402, the GUI shown in fig. 4 (c) may be displayed.
Referring to the GUI shown in fig. 4 (c), the interface displays the content of the news and the user may start reading it. If, while reading line 6, the user temporarily does not want to continue reading on the mobile phone, the mobile phone may be moved near the speaker device, for example to within 30 cm of it; the text content on line 6 and the subsequent text content can then be played on the speaker device, while the mobile phone interface still displays the current page (assuming that the speaker device and the mobile phone are connected to the same Wi-Fi network).
After the speaker device has played for a period of time, if the user wants to continue reading on the mobile phone, the mobile phone can be moved near the speaker device again; the speaker device then stops playing the text content, and the mobile phone interface can display the specific position to which the speaker device has played.
For example, assume that when the speaker device has played to line 13 of the current page, the user moves the mobile phone near the speaker device again; the speaker device stops playing the text content, and the mobile phone interface displays the GUI shown in fig. 4 (d). Referring to the GUI shown in fig. 4 (d), it can be seen that the font of line 13 is darker than that of the other lines. The user therefore knows that the speaker device had played the text content to line 13 when the mobile phone was moved near the speaker device again, and can continue reading from line 13 onward.
Illustratively, assume that the speaker device has played to line 3 of another page (which may be the next page or a later page); the user moves the mobile phone near the speaker device again, the speaker device stops playing the text content, and the mobile phone interface displays the GUI shown in fig. 4 (e). Referring to the GUI shown in fig. 4 (e), it can be seen that the interface displays a refreshed page and the font of line 3 is darker than that of the other lines. The user therefore knows that the speaker device had played the text content to line 3 of that page when the mobile phone was moved near the speaker device again, and can continue reading from line 3 of that page onward.
Fig. 5 shows another set of GUIs of a mobile phone, wherein from (a) in fig. 5 to (f) in fig. 5, a method for APP2 in the mobile phone to connect with a speaker device and implement cross-device text continuation is shown.
Referring to the GUI shown in fig. 5 (a), the GUI is a desktop of a mobile phone. When the mobile phone detects that the user clicks the icon 501 of App2 on the desktop, app2 can be started, and a GUI as shown in (b) in fig. 5 is displayed, and the GUI may be referred to as a news homepage interface.
Referring to the GUI shown in (b) of fig. 5, which shows a plurality of news items, when the mobile phone detects that the user clicks on a certain news frame 502, the GUI shown in (c) of fig. 5 may be displayed.
Referring to the GUI shown in fig. 5 (c), the interface displays the content of the news and the user may start reading it. If, while reading line 6, the user temporarily does not want to continue reading on the mobile phone, the user can slide line 6 up to line 1 of the mobile phone interface by hand, so that the GUI shown in fig. 5 (d) is displayed. The mobile phone is then moved near the speaker device, for example to within 30 cm of it, and line 6 (now line 1 of the current interface) and the subsequent text content can be played on the speaker device (assuming that the speaker device and the mobile phone are connected to the same Wi-Fi network).
After the speaker device has played for a period of time, if the user wants to continue reading on the mobile phone, the mobile phone can be moved near the speaker device again; the speaker device then stops playing the text content, and the mobile phone interface can display the specific position to which the speaker device has played.
Illustratively, assume that the speaker device has played to line 13 of the current page and the user moves the mobile phone near the speaker device again; the speaker device stops playing the text content, and the mobile phone interface displays the GUI shown in fig. 5 (e). Referring to the GUI shown in fig. 5 (e), it can be seen that the font of line 13 is darker than that of the other lines. The user therefore knows that the speaker device had played the text content to line 13 when the mobile phone was moved near the speaker device again, and can continue reading from line 13 onward.
Illustratively, assume that the speaker device has played to line 3 of another page (which may be the next page or a later page); the user moves the mobile phone near the speaker device again, the speaker device stops playing the text content, and the mobile phone interface displays the GUI shown in fig. 5 (f). Referring to the GUI shown in fig. 5 (f), it can be seen that the interface displays a refreshed page and the font of line 3 is darker than that of the other lines. The user therefore knows that the speaker device had played the text content to line 3 of that page when the mobile phone was moved near the speaker device again, and can continue reading from line 3 of that page onward.
It should be noted that, if the user opens APP1, the process of text continuation is similar to the process of fig. 4 or fig. 5, and is not described again. In addition, APP1 and APP2 shown above are only exemplary, and may also be other APPs including text content, and the present application should not be particularly limited.
The embodiment of the present application can also be applied to a dictionary pen and a speaker device: when the dictionary pen approaches or touches the speaker device for the first time, the speaker device starts to play the words; when the dictionary pen approaches or touches the speaker device again, the speaker device stops playing, and the dictionary pen obtains the index currently being played and locates the current word.
The internal implementation process and the judgment logic for implementing cross-device text continuation in the embodiment of the present application are described below with reference to fig. 6 to 8. Fig. 6 shows a schematic diagram of a method 600 for text continuation across devices according to an embodiment of the present application, where the method 600 may include steps S610 to S640.
S610, the mobile phone subscribes to the Wi-Fi sensing service of the sound box device, the user enables the mobile phone to be close to the sound box device, and the sound box device finds the mobile phone.
In the embodiment of the present application, the mobile phone's subscription to the Wi-Fi sensing service of the speaker device can be set on the mobile phone in advance. Since a connection is established between the speaker device and the mobile phone through Wi-Fi, when the user brings the mobile phone close to the speaker device, the speaker device can discover the mobile phone through Wi-Fi.
Here, the mobile phone approaching the speaker device and the speaker device discovering the mobile phone can be understood as follows: when the distance between the mobile phone and the speaker device is smaller than a certain threshold, the speaker device can discover the mobile phone. For example, assuming that the threshold is 30 cm, the speaker device can discover the mobile phone when the mobile phone is less than 30 cm away from it.
And S612, the speaker device recognizes that the mobile phone is approaching it and that the current state is a reading state, and triggers the antenna to switch to the weak antenna.
And S614, the sound box device dynamically generates an encryption key and encrypts the key by using a private key.
And S616, the sound box equipment sends the encrypted secret key to the mobile phone.
And S618, after the speaker device successfully sends the encrypted key, it switches the antenna back to the normal mode.
In the embodiment of the present application, when the speaker device recognizes that the mobile phone is gradually approaching it and that the mobile phone is currently in a reading state (for example, the current interface of the mobile phone displays a news article or a chapter of a novel), the speaker device may trigger the antenna to switch to the weak antenna. As shown in fig. 3, when the antenna is the weak antenna, a key for secure encryption may be provided; the speaker device may therefore dynamically generate the key at this time, encrypt it with the private key, and then send the encrypted key to the mobile phone. After the speaker device successfully sends the encrypted key to the mobile phone, the antenna is switched back to the normal mode. It should be noted that, in the normal mode, the speaker device can exchange information with the mobile phone.
S620, the mobile phone triggers a text continuing request, extracts the content source information and the current content offset position, and simultaneously acquires the user accessing the content source and the authentication information.
In this embodiment, the step S620 may be executed at the same time as any one of the steps S612, S614, S616, and S618, or may be executed after the step S618, which is not limited.
After triggering the text continuation request, the mobile phone can extract several pieces of information, including the content source information and the current content offset position, and can simultaneously acquire the user accessing the content source and the authentication information. The content source information may be the source address of the content currently accessed by the user; the current content offset position may be the difference between the line currently being read and the first line, or the difference between the word currently being read and the first word of the first line; and the user and authentication information for accessing the content source may be information related to the user, such as whether the user has properly registered or logged in on the APP, that is, whether the user is legitimate.
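Purely as an illustration of the information involved, the "first information" described above can be modeled as a small record carrying the content source, the offset, and the user's authentication data. The following is a minimal Python sketch; the field names (source_address, line_offset, word_offset, user_id, auth_token) are assumptions for illustration and are not defined by the present application.

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class FirstInformation:
    """Information the phone extracts when a text continuation request is triggered."""
    source_address: str          # source address of the content currently accessed
    line_offset: int             # difference between the line being read and line 1
    word_offset: Optional[int]   # optional word-granularity offset from word 1 of line 1
    user_id: str                 # user accessing the content source
    auth_token: str              # authentication information (e.g. APP login credential)

    def to_bytes(self) -> bytes:
        """Serialize for transmission (and later encryption) to the speaker device."""
        return json.dumps(asdict(self)).encode("utf-8")

# Example: the user is reading line 6 of a news article in APP2.
info = FirstInformation(
    source_address="https://news.example.com/articles/12345",
    line_offset=5,        # line 6 relative to line 1
    word_offset=None,
    user_id="user-001",
    auth_token="opaque-session-token",
)
payload = info.to_bytes()
```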
It should be noted that the current content offset position may be recognized by the camera on the mobile phone; that is, the camera can recognize the line of the mobile phone page that the user's eyes are looking at. As shown in (d) in fig. 4, if the user temporarily does not want to continue reading on the mobile phone while reading line 6, the mobile phone may be moved near the speaker device; at this time, the mobile phone can capture, through the camera, the line of the page the user is looking at and calculate the difference between that line and line 1. Of course, in some embodiments, the mobile phone may also recognize the word the user is looking at through the camera and calculate the difference between that word and the first word of line 1.
Alternatively, the current content offset position may also be obtained directly by the processor inside the mobile phone, without using the camera. As shown in (d) in fig. 5, if the user temporarily does not want to continue reading on the mobile phone while reading line 6, line 6 can be slid by hand to line 1 of the mobile phone interface; the mobile phone then knows that the user has read to line 6, so it can calculate the current content offset position as the difference between line 6 and line 1.
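Both ways of obtaining the offset described above (camera-based recognition of the line being looked at, or the scroll position of the page) reduce to a simple difference against line 1. A minimal sketch follows, assuming that some other component supplies the current line (or word) index and, for word granularity, the number of words on each line; these inputs are assumptions for illustration.

```python
def content_offset_from_line(current_line_number: int, first_line_number: int = 1) -> int:
    """Offset of the current content: difference between the line being read and line 1."""
    if current_line_number < first_line_number:
        raise ValueError("current line cannot precede the first line")
    return current_line_number - first_line_number

def content_offset_from_word(line_number: int, word_index_in_line: int,
                             words_per_line: list[int]) -> int:
    """Word-granularity offset: words before the current word, counted from line 1, word 1.

    words_per_line[i] is the number of words on line i+1; this layout information
    is an assumption made for the sketch.
    """
    words_before = sum(words_per_line[: line_number - 1])
    return words_before + word_index_in_line

# The user reads line 6: the line offset relative to line 1 is 5.
print(content_offset_from_line(6))                            # -> 5
# The user looks at the 3rd word of line 6 on a page whose first five lines
# contain 8, 9, 7, 10 and 9 words respectively.
print(content_offset_from_word(6, 2, [8, 9, 7, 10, 9, 11]))   # -> 45
```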
S622, the mobile phone decrypts the encrypted key to obtain the key, and encrypts the information by using the key.
And S624, sending the encrypted information.
In this embodiment of the present application, after the mobile phone extracts the above information (that is, the content source information and the current content offset position, together with the user accessing the content source and the authentication information), it may encrypt the information with the key previously received from the speaker device (the key can be obtained by decryption) and send the encrypted information to the speaker device.
And S626, the sound box equipment decrypts the received information and stores the decrypted information.
In the embodiment of the application, after receiving the information sent by the mobile phone, the sound box device may decrypt the information by using the key, thereby obtaining the content source information and the current content offset position, and simultaneously obtaining the user accessing the content source and the authentication information, and storing the decrypted information.
It can be understood that, in step S614, the speaker device dynamically generates an encryption key and encrypts it with a private key. Since the encryption key, the private key, and the public key are all generated by the speaker device, and the private key and the public key form a key pair, information encrypted with the private key must be decrypted with the public key, and information encrypted with the public key must be decrypted with the private key. After receiving the encrypted key, the mobile phone can decrypt it with the public key to obtain the key and then encrypt the information with that key; after receiving the encrypted information, the speaker device can decrypt it with the key.
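For the symmetric part of this exchange (the mobile phone encrypts the extracted information with the session key, and the speaker device decrypts it), a minimal sketch using the third-party Python `cryptography` package is shown below. The asymmetric wrapping of the session key with the speaker device's private/public key pair (S614 and S622) is not reproduced here; the sketch simply assumes that both sides already share the dynamically generated key.

```python
# pip install cryptography
import json
from cryptography.fernet import Fernet

# S614 (simplified): the speaker device dynamically generates a session key.
session_key = Fernet.generate_key()

# Phone side (S620/S622): extract the first information and encrypt it with the key.
first_information = {
    "source_address": "https://news.example.com/articles/12345",  # content source
    "line_offset": 5,                                             # currently reading line 6
    "user_id": "user-001",
    "auth_token": "opaque-session-token",
}
ciphertext = Fernet(session_key).encrypt(json.dumps(first_information).encode("utf-8"))

# Speaker side (S626): decrypt the received information and store it.
recovered = json.loads(Fernet(session_key).decrypt(ciphertext).decode("utf-8"))
assert recovered == first_information
```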
And S628, sending request information for requesting a content text to the sound box cloud based on the decrypted information.
In the embodiment of the present application, the speaker device can obtain the above information by decryption (including the content source information and the current content offset position, as well as the user accessing the content source and the authentication information), and can send request information to the speaker cloud using the content source information, the user accessing the content source, and the authentication information, where the request information is used to request the corresponding content text.
Exemplarily, if the content source information obtained after decryption is a source address of a certain piece of news, the request information is used for requesting a content text related to the news from the speaker cloud; and if the content source information obtained after decryption is the source address of a certain novel, the request information is used for requesting the content text related to the novel from the sound box cloud.
And S630, the sound box cloud verifies the user information, and if the user information passes the verification, the content text is obtained.
And S632, the sound box cloud sends the acquired content text to the sound box equipment.
In the embodiment of the present application, after the speaker cloud receives the request information sent by the speaker device, which may include the content source information, the user accessing the content source, and the authentication information, the speaker cloud may first verify whether the user is legitimate. If the verification succeeds, the speaker cloud requests the corresponding content text from the cloud server corresponding to the APP based on the content source information, and sends the content text to the speaker device after acquiring it. If the verification fails, the speaker cloud does not request the content text from the cloud server corresponding to the APP.
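A minimal sketch of the speaker-cloud behaviour in steps S628 to S632 follows: verify the user carried in the request first, and only then fetch the content text from the APP's cloud server identified by the source address. The verification rule and the fetch_from_app_cloud helper are placeholders; the present application does not specify how the user check or the upstream request is performed.

```python
from typing import Callable, Optional

def handle_content_request(source_address: str,
                           user_id: str,
                           auth_token: str,
                           is_legitimate: Callable[[str, str], bool],
                           fetch_from_app_cloud: Callable[[str], str]) -> Optional[str]:
    """Speaker-cloud handler: verify the user first; only then request the content text."""
    if not is_legitimate(user_id, auth_token):
        # Verification failed: do not contact the APP's cloud server at all (S630).
        return None
    # Verification passed: request the content text corresponding to the source address.
    return fetch_from_app_cloud(source_address)

# Usage with stand-in callables (illustrative only):
text = handle_content_request(
    "https://news.example.com/articles/12345",
    "user-001",
    "opaque-session-token",
    is_legitimate=lambda uid, tok: tok == "opaque-session-token",
    fetch_from_app_cloud=lambda url: "Line 1 ...\nLine 2 ...\nLine 3 ...",
)
```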
And S634, the sound box equipment positions to a specific position based on the content text and the decrypted information, and extracts the text to be played.
And S636, converting the extracted text into voice by the sound box equipment, and playing.
In this embodiment of the application, after receiving the content text sent by the speaker cloud, the speaker device may locate the specific position that the user read before based on the information "current content offset position" obtained after decryption in step S626, extract the text after the position, and convert the text into a voice to play.
As shown in the GUIs in fig. 4 (c) and fig. 5 (d), if the user temporarily does not want to continue reading on the mobile phone while reading line 6, the mobile phone can be moved near the speaker device; after acquiring the content text, the speaker device can extract the text content on line 6 and the subsequent text content and convert it into speech for playback.
It should be noted that, in the embodiment of the present application, when the speaker device converts the text into speech, the text content on a page may be processed in fragments. For example, the content in lines 6 to 9 may be converted to speech first; while that segment of speech is playing, the content in lines 10 to 12 may be converted, and so on, until all the content of the page has been played.
Alternatively, the speaker device may first convert the content of the paragraph containing line 6 into speech; while that segment is playing, it may convert the content of the next paragraph, and so on, until all the content of the page has been played.
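The fragmentation strategy described above (convert a few lines to speech, play them, and convert the next fragment while the previous one is playing) can be sketched as a simple generator pipeline. The synthesize and play callables below stand in for a real text-to-speech engine and audio output, which are not named by the present application; the one-fragment lookahead is a sequential simplification of the overlap.

```python
from typing import Callable, Iterator

def fragments(lines: list[str], start_line: int, lines_per_fragment: int = 4) -> Iterator[str]:
    """Yield fragments of the page, starting at the line the user had read to."""
    for i in range(start_line - 1, len(lines), lines_per_fragment):
        yield "\n".join(lines[i:i + lines_per_fragment])

def play_page(lines: list[str], start_line: int,
              synthesize: Callable[[str], bytes],
              play: Callable[[bytes], None]) -> None:
    """Convert a fragment to speech and prepare the next one before playing the previous.

    The one-fragment lookahead below is a simplification of "process lines 10 to 12
    while lines 6 to 9 are playing"; a real implementation would run the synthesis
    on a separate thread or task so it overlaps with playback.
    """
    pending = None
    for text in fragments(lines, start_line):
        audio = synthesize(text)          # synthesize the next fragment
        if pending is not None:
            play(pending)                 # play the previously synthesized fragment
        pending = audio
    if pending is not None:
        play(pending)                     # play the final fragment

# Example with stand-in callables: start from line 6 of the current page.
page = [f"line {n} ..." for n in range(1, 21)]
play_page(page, start_line=6,
          synthesize=lambda t: t.encode("utf-8"),
          play=lambda audio: print(f"playing {len(audio)} bytes"))
```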
S638, the content offset is refreshed.
And S640, when the playing of the page content is finished, triggering to acquire the next page of content.
In the embodiment of the present application, if the user has read to line 6 of page 1 before and the content on page 1 has been completely played through the above steps, the steps S630-S638 are continued until all the content read by the user is completely played.
Fig. 7 is a schematic diagram illustrating a method 700 for text continuation across devices according to an embodiment of the present application, where the method 700 may include steps S710-S738.
S710, the mobile phone subscribes to the Wi-Fi sensing service of the sound box device, the user enables the mobile phone to be close to the sound box device, and the sound box device finds the mobile phone.
And S712, the speaker device recognizes that the mobile phone is approaching it and that the current state is a reading state, and triggers the antenna to switch to the weak antenna.
S714, the sound box device dynamically generates an encryption key, and encrypts the encryption key using a private key.
And S716, the sound box equipment sends the encrypted key to the mobile phone.
And S718, after the speaker device successfully sends the encrypted key, it switches the antenna back to the normal mode.
S720, the mobile phone triggers a text continuing request, extracts the content source information and the current content offset position, and simultaneously acquires the user accessing the content source and the authentication information.
S722, the mobile phone decrypts the encrypted key to obtain the key, and encrypts the information by using the key.
And S724, sending the encrypted information.
And S726, the sound box equipment decrypts the received information and stores the decrypted information.
And S728, sending request information for requesting the content text to the sound box cloud based on the decrypted information.
And S730, the sound box cloud verifies the user information, and if the user information passes the verification, the content text is obtained.
For details of steps S710-S730, reference may be made to the related description of steps S610-S630, and details are not repeated here.
S732, the speaker cloud locates the specific position based on the content text and the decrypted information, and extracts the text to be played to generate speech.
S734, the sound box cloud sends the generated voice fragment, and the sound box cloud caches the generated voice.
In this embodiment of the present application, after the speaker cloud acquires the relevant content text, it may locate the position the user read to, based on the "current content offset position" obtained after decryption in step S726, extract the text after that position, and convert it into speech.
As shown in the GUIs in fig. 4 (c) and fig. 5 (d), if the user temporarily does not want to continue reading on the mobile phone while reading line 6, the mobile phone can be moved near the speaker device; after acquiring the content text, the speaker cloud can extract the text content on line 6 and the subsequent text content, convert it into speech, and then send the converted speech to the speaker device in segments.
It should be noted that, in the embodiment of the present application, when the speaker cloud converts the text into speech, the text content on a page may be processed in fragments. For example, the content in lines 6 to 9 may be converted to speech first; while that segment of speech is playing, the content in lines 10 to 12 may be converted, and so on, until all the content of the page has been played.
Alternatively, the speaker cloud may first convert the content of the paragraph containing line 6 into speech; while that segment is playing, it may convert the content of the next paragraph, and so on, until all the content of the page has been played.
It should be noted that, in this embodiment of the present application, after the speaker cloud generates the corresponding speech and sends it to the speaker device in segments, the speech segments may also be cached on the cloud server. In this way, when another user reads the same text content and needs the speaker device to play it, the speaker cloud does not have to convert the text content into speech again and can instead send the speech segments cached on the cloud server to the speaker device, which saves conversion time and improves efficiency.
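The caching of generated speech segments on the cloud server, so that another user requesting the same text does not trigger a second text-to-speech conversion, can be sketched as a simple keyed cache. The cache key (content source plus fragment offset) and the in-memory dictionary are illustrative choices and not details given by the present application.

```python
from typing import Callable, Dict, Tuple

class VoiceSegmentCache:
    """Speaker-cloud cache of synthesized speech segments, keyed by content and offset."""

    def __init__(self, synthesize: Callable[[str], bytes]):
        self._synthesize = synthesize
        self._store: Dict[Tuple[str, int], bytes] = {}

    def get_segment(self, source_address: str, fragment_offset: int, text: str) -> bytes:
        key = (source_address, fragment_offset)
        if key not in self._store:
            # First reader of this fragment: convert the text to speech and cache it.
            self._store[key] = self._synthesize(text)
        # Later readers of the same content reuse the cached audio directly.
        return self._store[key]

# Usage with a stand-in synthesizer:
cache = VoiceSegmentCache(synthesize=lambda t: t.encode("utf-8"))
first = cache.get_segment("https://news.example.com/articles/12345", 0, "line 6 ... line 9 ...")
again = cache.get_segment("https://news.example.com/articles/12345", 0, "line 6 ... line 9 ...")
assert first is again   # the second request is served from the cache
```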
And S736, the sound box device plays the voice clip.
S738, after the playback is completed, triggers loop processing S728-S736.
In the embodiment of the present application, the speaker device can play the speech segments after receiving them from the speaker cloud. It should be noted that although the speaker device receives the speech in segments, the speech heard by the user is continuous within the range acceptable to the human ear.
In addition, in the embodiment of the present application, after the content corresponding to the content source information has been fully acquired and played page by page, the speaker cloud can trigger audio integration and store the content source information, the audio offset, and the mapping between the content text offset and the audio offset. If the content source information can be matched to complete audio, the complete audio playback address, the content source information, the audio offset, and the mapping between content offset and audio offset are issued directly, without executing steps S730 to S738; the speaker device then plays the corresponding complete audio and at the same time caches the content source information, the content offset, and the audio offset information.
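The audio integration step above stores, alongside the complete audio, a mapping between text offsets and audio offsets so that playback can later be resumed at the right place. A minimal sketch of such a record and its lookup follows; the field names and the line/second units are assumptions made for illustration.

```python
from bisect import bisect_right
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class IntegratedAudio:
    """Complete audio for one content source, plus a text-offset -> audio-offset map."""
    source_address: str
    audio_url: str
    # Sorted pairs of (text offset in lines, audio offset in seconds).
    offset_map: List[Tuple[int, float]] = field(default_factory=list)

    def audio_offset_for(self, text_offset: int) -> float:
        """Largest mapped audio offset whose text offset does not exceed text_offset."""
        keys = [t for t, _ in self.offset_map]
        idx = bisect_right(keys, text_offset) - 1
        return self.offset_map[idx][1] if idx >= 0 else 0.0

record = IntegratedAudio(
    source_address="https://news.example.com/articles/12345",
    audio_url="https://cloud.example.com/audio/12345.mp3",
    offset_map=[(0, 0.0), (5, 42.5), (12, 98.0)],   # lines 1, 6 and 13 of the text
)
# Resuming from line 6 (text offset 5) starts complete-audio playback at 42.5 s.
print(record.audio_offset_for(5))    # -> 42.5
```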
The continuation of content being read on the mobile phone to playback on the speaker device has been described above with reference to fig. 6 and fig. 7; the continuation of content being played on the speaker device back to reading on the mobile phone will be described below with reference to fig. 8.
Fig. 8 shows a schematic diagram of a method 800 for text continuation across devices according to an embodiment of the present application, where the method 800 may include steps S810-S834.
S810, the mobile phone subscribes to the Wi-Fi sensing service of the sound box device, the user enables the mobile phone to be close to the sound box device, and the sound box device finds the mobile phone.
And S812, the speaker device recognizes that the mobile phone is approaching it and that the current state is a reading state, and triggers the antenna to switch to the weak antenna.
S814, the sound box device dynamically generates an encryption key and encrypts the key by using a private key.
And S816, the sound box equipment sends the encrypted secret key to the mobile phone.
And S818, after the speaker device successfully sends the encrypted key, it switches the antenna back to the normal mode.
For the specific contents of steps S810-S818, reference may be made to the related descriptions of steps S610-S618, and details are not repeated herein.
And S820, the sound box device triggers the text to continue back to the mobile phone.
And S822, triggering the verification of the mobile phone user.
In the embodiment of the present application, suppose the speaker device is playing the speech corresponding to the text content at the current moment. If the user brings the mobile phone close to the speaker device, the continuation of the text content back to the mobile phone can be triggered; that is, the speaker device stops playing the speech, and the mobile phone interface displays the text corresponding to the point at which the speaker device stopped playing.
In the embodiment of the present application, the speaker device triggering verification of the mobile phone user can be understood as follows: the speaker device sends a verification request message to the mobile phone, where the message is used to request the mobile phone to send its relevant information, such as the authentication information of the account of the APP (for example, APP1 or APP2 in fig. 4 or fig. 5).
S824, the handset returns the user credentials.
After receiving the verification request sent by the speaker device, the mobile phone can send the authentication information of the APP account to the speaker device, for example in the form of a character string, to facilitate verification by the speaker cloud.
And S826, the sound box equipment sends the authentication information acquired from the mobile phone side and the information of the sound box equipment to the sound box cloud, and requests identity verification.
And S828, returning a check result.
In the embodiment of the present application, after receiving the authentication information sent by the mobile phone, the speaker device can send the authentication information acquired from the mobile phone side, together with its own authentication information, to the speaker cloud. After receiving this information, the speaker cloud can verify the two devices and send the verification result to the speaker device. If the authentication information from the mobile phone side matches the authentication information of the speaker device, the speaker cloud can regard the user holding the mobile phone as legitimate; if it does not match, the speaker cloud can determine that the user holding the mobile phone is not legitimate.
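A minimal sketch of the check the speaker cloud performs in steps S826 to S828 follows: compare the authentication information received from the mobile phone side with that of the speaker device and return the result. How the two credentials are actually matched (for example, same account) is not specified by the present application, so the rule below is only a placeholder.

```python
from dataclasses import dataclass

@dataclass
class DeviceCredential:
    account_id: str     # account the device is signed in with (illustrative field)
    auth_token: str     # opaque authentication information

def verify_pairing(phone: DeviceCredential, speaker: DeviceCredential) -> bool:
    """Speaker-cloud check: the phone user is treated as legitimate only if the
    authentication information from the phone side matches that of the speaker device."""
    return phone.account_id == speaker.account_id

phone_cred = DeviceCredential(account_id="user-001", auth_token="phone-token")
speaker_cred = DeviceCredential(account_id="user-001", auth_token="speaker-token")
print(verify_pairing(phone_cred, speaker_cred))   # -> True: continuation back is allowed
```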
And S830, after the identity of the mobile phone user is confirmed, extracting the content source information and the current content offset position, and encrypting the content source information and the current content offset position by using the encrypted key.
S832, the encrypted information is transmitted.
In the embodiment of the present application, after the speaker device receives the verification result sent by the speaker cloud, it can proceed according to that result. If the result is that the authentication information from the mobile phone side matches the authentication information of the speaker device, the speaker device extracts the content source information and the current content offset position, encrypts the information with the encryption key, and sends the encrypted information to the mobile phone; if the result is that the authentication information does not match, the speaker device does not extract the relevant information.
For example, as shown in the GUI in fig. 4 (d) or fig. 5 (e), when the speaker device has played to line 13 and the user moves the mobile phone near the speaker device again, the speaker device stops playing the text content. At this time, the speaker device can extract the difference between the currently played line and line 1, encrypt it, and send it to the mobile phone.
For another example, as shown in the GUI in fig. 4 (e) or fig. 5 (f), if the speaker device has played to line 3 of another page (which may be the next page or a later page) and the user moves the mobile phone near the speaker device again, the speaker device stops playing the text content. At this time, the speaker device can extract the difference between the line played on the current page and line 1, encrypt it, and send it to the mobile phone.
Of course, in some embodiments, the sound box device may also extract the difference between the currently played word and the 1 st word in the 1 st line, encrypt the difference, and send the encrypted difference to the mobile phone.
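When continuation back to the mobile phone is allowed, the speaker device expresses where it stopped as an offset relative to line 1 (or to the first word of line 1) of the page it is reading out. A minimal sketch follows, assuming the speaker device tracks the line and word it is currently playing; the class and field names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class PlaybackPosition:
    """Position the speaker device has played to, expressed against the current page."""
    page_index: int      # 0 for the page the continuation started on
    line_number: int     # 1-based line currently being played
    word_index: int = 0  # 0-based word within that line, if word granularity is used

    def line_offset(self) -> int:
        """Difference between the line being played and line 1 of the current page."""
        return self.line_number - 1

    def word_offset(self, words_per_line: list[int]) -> int:
        """Difference between the word being played and word 1 of line 1."""
        return sum(words_per_line[: self.line_number - 1]) + self.word_index

# The speaker has played to line 13 of the current page when the phone approaches again.
pos = PlaybackPosition(page_index=0, line_number=13)
print(pos.line_offset())   # -> 12, the value encrypted and sent back to the phone
```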
And S834, decrypting the received information, retrieving the text content based on the decrypted information, and positioning to the corresponding position.
In the embodiment of the present application, after receiving the information sent by the speaker device, the mobile phone can decrypt it and, based on the decrypted information, relocate to the position the speaker device had played to, so that the user can continue reading on the mobile phone.
For example, as in the GUI shown in (d) in fig. 4 or (e) in fig. 5 described above, the font of line 13 is darker than that of the other lines. Thus, the user knows that the speaker device has played the text content to line 13 and can continue reading from line 13 onward.
Illustratively, as shown in the GUI of fig. 4 (e) or fig. 5 (f) above, it can be seen that the interface displays a refreshed page, and the font of the 3 rd row in the figure is darker than the fonts of the other rows. Thus, the user may know that the speaker device played to line 3 of the page so that the user may continue reading backward from line 3.
If the decrypted information includes the difference between the word the speaker device last played and the first word of line 1, the corresponding word in the mobile phone interface can be darkened, so that the user can continue reading from that word.
In addition, in some embodiments, if the mobile phone and the speaker device only support continuation of the text content of a single page, the text content itself can be delivered. As described above, if the user temporarily does not want to continue reading on the mobile phone while reading line 6, the mobile phone can be moved near the speaker device; at this time, the mobile phone can transmit the text content from line 6 onward on the current page to the speaker device, so that this text content can be played on the speaker device.
The following describes a flow of a cross-device text continuation method provided by the present application.
Referring to fig. 9, fig. 9 shows a schematic flow diagram of a method 900 of text continuation across devices.
As shown in fig. 9, the method 900 may include:
S910, when an interface of a first electronic device displays text content and the distance between the first electronic device and a second electronic device is smaller than or equal to a first threshold, the first electronic device extracts first information, where the first information includes information related to the text content.
In the implementation of the present application, the first electronic device may be a mobile phone in the method 600-800 described above, and the second electronic device may be a sound box device in the method 600-800 described above. The first threshold may be 30cm in the above embodiment, and when the distance between the first electronic device and the second electronic device is smaller than or equal to the first threshold and the interface of the first electronic device displays text content, the first electronic device may extract the first information.
It should be understood that the first threshold may be other values, and should not be particularly limited in this application.
Optionally, the first information includes at least one of the following information:
a source address of the text content, a current offset location of the text content, information of a user who accesses the text content.
S920, the first electronic device sends the first information to the second electronic device.
S930, the second electronic device plays the voice corresponding to the text content according to the first information.
In this embodiment of the application, the second electronic device may play the voice corresponding to the text content according to the first information, which may be understood as: the second electronic device can play the voice corresponding to the current page based on the first information. In a specific implementation, the second electronic device may start playing from the 1 st line of the current page, or the second electronic device may also start playing from the nth line of the current page, where n is a current offset line number of the text content, that is, a line number that a user is reading when the first electronic device is close to the second electronic device.
According to this scheme, when the interface of the first electronic device displays text content and the distance between the first electronic device and the second electronic device is smaller than or equal to the first threshold, the first electronic device can extract the first information and send it to the second electronic device, so that the second electronic device can play the voice corresponding to the text content according to the first information. When the first electronic device is close to the second electronic device, the second electronic device can play the text content of the first electronic device in audio form, so that continuation across devices and across different media can be realized, improving the user experience.
Optionally, in some embodiments, the method 900 further comprises:
the second electronic equipment sends second information to the first electronic equipment, wherein the second information comprises information used for encryption;
the first electronic device sends the first information to the second electronic device, including:
the first electronic equipment encrypts the first information by using the second information;
the first electronic equipment sends the encrypted first information to the second electronic equipment;
the method 900 further comprises:
and the second electronic equipment decrypts the encrypted first information to obtain the first information.
The second information in the implementation of the present application may be the secret key, the private key, the public key, etc. in the above methods 600-800. After the first electronic device receives the second information used for encryption, the first information can be encrypted by using the second information, and the encrypted information is sent to the second electronic device.
According to the scheme, the first information can be safely transmitted by encrypting the first information, so that the safety of information transmission can be ensured.
In this embodiment of the present application, the second electronic device plays the voice corresponding to the text content according to the first information, and may be implemented in multiple ways.
The first method is as follows:
the second electronic device plays the voice corresponding to the text content according to the first information, and the method comprises the following steps:
the second electronic equipment requests the text content from a cloud server of the second electronic equipment according to the source address of the text content in the first information and the information of the user accessing the text content;
the cloud server of the second electronic device sends the text content to the second electronic device;
the second electronic equipment extracts target text content from the text content according to the current offset position of the text content in the first information;
the second electronic equipment converts the target text content into voice;
and the second electronic equipment plays the voice.
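Putting the steps of the first method together, the role of the second electronic device can be sketched as one short orchestration function. The request_text_from_cloud, synthesize, and play callables below are placeholders for the speaker-cloud request, the text-to-speech engine, and the audio output, none of which are pinned down by the present application.

```python
from typing import Callable

def continue_text_as_speech(first_information: dict,
                            request_text_from_cloud: Callable[[str, str], str],
                            synthesize: Callable[[str], bytes],
                            play: Callable[[bytes], None]) -> None:
    """First method on the second electronic device: fetch text, cut at the offset, speak it."""
    # Request the content text from the cloud server using the source address
    # and the user information carried in the first information.
    text = request_text_from_cloud(first_information["source_address"],
                                   first_information["auth_token"])
    # Extract the target text content from the current offset position onward.
    lines = text.splitlines()
    target = "\n".join(lines[first_information["line_offset"]:])
    # Convert the target text content into speech and play it.
    play(synthesize(target))

continue_text_as_speech(
    {"source_address": "https://news.example.com/articles/12345",
     "auth_token": "opaque-session-token",
     "line_offset": 5},
    request_text_from_cloud=lambda url, tok: "\n".join(f"line {n} ..." for n in range(1, 21)),
    synthesize=lambda t: t.encode("utf-8"),
    play=lambda audio: print(f"playing {len(audio)} bytes"),
)
```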
The second method comprises the following steps:
the method 900 further comprises:
the second electronic equipment sends the first information to a cloud server of the second electronic equipment;
the second electronic device plays the voice corresponding to the text content according to the first information, and the method comprises the following steps:
the second electronic equipment requests the text content from a cloud server of the second electronic equipment according to the source address of the text content in the first information and the information of the user accessing the text content;
the cloud server of the second electronic equipment extracts target text content from the text content according to the current offset position in the first information;
the cloud server of the second electronic equipment converts the target text content into voice;
the cloud server of the second electronic device sends the voice to the second electronic device;
and the second electronic equipment plays the voice.
Optionally, in some embodiments, the method 900 further comprises:
the cloud server of the second electronic device caches the voice.
For the above two ways, reference may be made to the descriptions in methods 600 and 700 above, and the description thereof is omitted here.
Further, in some embodiments, the method 900 further comprises:
when the distance between the first electronic equipment and the second electronic equipment is smaller than or equal to the first threshold value again, the second electronic equipment sends third information to the first electronic equipment, wherein the third information is used for requesting identity information of the first electronic equipment;
the first electronic equipment sends the identity information of the first electronic equipment to the second electronic equipment;
and the second electronic equipment determines whether to extract the first information according to the identity information of the first electronic equipment and the identity information of the second electronic equipment.
Optionally, in some embodiments, the determining, by the second electronic device, whether to extract the first information according to the identity information of the first electronic device and the identity information of the second electronic device includes:
the second electronic equipment sends the identity information of the first electronic equipment and the identity information of the second electronic equipment to a cloud server of the second electronic equipment;
the cloud server of the second electronic device verifies the identity information of the first electronic device and the identity information of the second electronic device;
the cloud server of the second electronic equipment sends the verified result to the second electronic equipment;
and the second electronic equipment determines whether to extract the first information according to the checked result.
Optionally, in some embodiments, the determining, by the second electronic device, whether to extract the first information according to the checked result includes:
if the verified result is that the identity information of the first electronic device does not match the identity information of the second electronic device, the second electronic device determines not to extract the first information; or,
if the checked result is that the identity information of the first electronic device is matched with the identity information of the second electronic device, the second electronic device determines to extract the first information;
the method further comprises the following steps:
and the second electronic equipment sends the first information to the first electronic equipment.
Optionally, in some embodiments, the method 900 further comprises:
the first electronic equipment receives the first information;
and the first electronic equipment determines the text position played by the second electronic equipment according to the first information.
For the above, reference may be made to the description of the method 800, which is not repeated herein.
FIG. 10 shows a schematic flow diagram of another method 1000 of text continuation across devices.
As shown in fig. 10, the method 1000 may include:
s1010, the second electronic device receives first information sent by the first electronic device, wherein the first information comprises information related to text content.
S1020, the second electronic device plays the voice corresponding to the text content according to the first information.
Optionally, in some embodiments, the first information comprises at least one of:
a source address of the text content, a current offset location of the text content, information of a user who accesses the text content.
According to the scheme provided by the present application, the second electronic device can play the voice corresponding to the text content according to the received first information. Because the second electronic device can play the text content of the first electronic device in audio form, continuation across devices and between different media can be realized, and the user experience is improved.
Optionally, in some embodiments, the method 1000 further comprises:
the second electronic equipment sends second information to the first electronic equipment, wherein the second information comprises information used for encryption;
the second electronic equipment receives first information sent by the first electronic equipment, and the method comprises the following steps:
the second electronic equipment receives the encrypted first information sent by the first electronic equipment;
and the second electronic equipment decrypts the encrypted first information to obtain the first information.
According to the scheme, the first information can be safely transmitted by encrypting the first information, so that the safety of information transmission can be ensured.
In the embodiment of the application, the second electronic device plays the voice corresponding to the text content according to the first information, and the playing can be achieved in various ways.
The first method comprises the following steps:
the second electronic device plays the voice corresponding to the text content according to the first information, and the method comprises the following steps:
the second electronic equipment requests the text content from a cloud server of the second electronic equipment according to the source address of the text content in the first information and the information of the user accessing the text content;
the second electronic device receiving the text content;
the second electronic equipment extracts target text content from the text content according to the current offset position of the text content in the first information;
the second electronic equipment converts the target text content into voice;
and the second electronic equipment plays the voice.
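A minimal sketch of way one, assuming a hypothetical HTTP endpoint on the second device's cloud server and an on-device text-to-speech engine; the endpoint path, parameter names and helper functions below are illustrative stand-ins, not an API defined by the embodiments.

```python
import requests  # third-party HTTP client, used here only for illustration

TARGET_LENGTH = 2000  # assumed size of the chunk to be read aloud next

def play_locally(first_information: dict, cloud_base_url: str) -> None:
    # Request the text content from the second device's cloud server, using the source
    # address and the accessing user's information carried in the first information.
    response = requests.get(
        f"{cloud_base_url}/text",                                  # hypothetical endpoint
        params={"source": first_information["source_address"]},
        headers={"Authorization": first_information["user_token"]},
        timeout=10,
    )
    text_content = response.text

    # Extract the target text content starting at the current offset position.
    offset = first_information["offset"]
    target_text = text_content[offset:offset + TARGET_LENGTH]

    # Convert the target text content into voice and play it on the second device.
    play_audio(text_to_speech(target_text))

def text_to_speech(text: str) -> bytes:
    raise NotImplementedError("stand-in for an on-device TTS engine")

def play_audio(audio: bytes) -> None:
    raise NotImplementedError("stand-in for the device's audio output pipeline")
```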
Way two:
the playing, by the second electronic device, of the voice corresponding to the text content according to the first information includes:
the second electronic device requests the text content from a cloud server of the second electronic device according to the source address of the text content in the first information and the information of the user accessing the text content;
the second electronic device receives the voice sent by the cloud server of the second electronic device; and
the second electronic device plays the voice.
For the above two ways, reference may be made to the descriptions in the methods 600 and 700, which are not repeated herein.
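In way two only the division of work changes: the cloud server of the second electronic device extracts the target text at the offset and synthesizes the voice itself, so the second electronic device merely fetches and plays the returned audio. A sketch under the same assumptions as above:

```python
import requests  # third-party HTTP client, used here only for illustration

def play_from_cloud(first_information: dict, cloud_base_url: str) -> None:
    # Hand the work to the cloud server: it fetches the text content, extracts the
    # target text at the current offset position, and converts it into voice.
    response = requests.post(
        f"{cloud_base_url}/tts",                       # hypothetical endpoint
        json={
            "source": first_information["source_address"],
            "offset": first_information["offset"],
            "user": first_information["user_token"],
        },
        timeout=30,
    )
    # The second electronic device only needs to play the synthesized voice it receives.
    play_audio(response.content)

def play_audio(audio: bytes) -> None:
    raise NotImplementedError("stand-in for the device's audio output pipeline")
```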
Further, in some embodiments, the method 1000 further comprises:
when the distance between the first electronic device and the second electronic device is again less than or equal to a first threshold, the second electronic device sends third information to the first electronic device, wherein the third information is used for requesting identity information of the first electronic device;
the second electronic device receives the identity information of the first electronic device sent by the first electronic device; and
the second electronic device determines whether to extract the first information according to the identity information of the first electronic device and the identity information of the second electronic device.
Optionally, in some embodiments, the determining, by the second electronic device, whether to extract the first information according to the identity information of the first electronic device and the identity information of the second electronic device includes:
the second electronic device sends the identity information of the first electronic device and the identity information of the second electronic device to a cloud server of the second electronic device; and
the second electronic device determines whether to extract the first information according to a verification result sent by the cloud server of the second electronic device.
Optionally, in some embodiments, the determining, by the second electronic device, whether to extract the first information according to the verification result sent by the cloud server of the second electronic device includes:
if the verification result indicates that the identity information of the first electronic device does not match the identity information of the second electronic device, the second electronic device determines not to extract the first information; or
if the verification result indicates that the identity information of the first electronic device matches the identity information of the second electronic device, the second electronic device determines to extract the first information;
the method further comprises:
the second electronic device sends the first information to the first electronic device.
For the above, reference may be made to the description of the method 800, which is not repeated herein.
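The re-approach handling above can be read as a small handshake on the second electronic device: obtain the first device's identity (the third information), have the cloud server compare the two identities, and hand the stored first information back only on a match. The sketch below models that decision; the types and the `verify_on_cloud` callable are assumptions for illustration, not part of the embodiments.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class VerificationResult:
    matched: bool  # result returned by the cloud server of the second electronic device

def handle_reapproach(first_identity: str,
                      second_identity: str,
                      stored_first_information: dict,
                      verify_on_cloud: Callable[[str, str], VerificationResult]) -> Optional[dict]:
    """Decide, on the second device, whether to extract and return the first information."""
    # Forward both identities to the second device's cloud server for verification.
    result = verify_on_cloud(first_identity, second_identity)

    if result.matched:
        # Identities match: extract the first information so it can be sent back to the
        # first electronic device, which then determines the position already played.
        return stored_first_information
    # Identities do not match: determine not to extract the first information.
    return None

# Example with a stub verifier that accepts any pair of identities (illustration only):
print(handle_reapproach("device-A", "device-B", {"offset": 1280},
                        lambda a, b: VerificationResult(matched=True)))
```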
FIG. 11 shows a schematic flow diagram of yet another method 1100 of text continuation across devices.
As shown in FIG. 11, the method 1100 may include:
S1110, when the interface of the first electronic device displays text content and the distance between the first electronic device and the second electronic device is less than or equal to a first threshold, the first electronic device extracts first information, wherein the first information comprises information related to the text content.
S1120, the first electronic device sends the first information to the second electronic device.
According to the scheme provided by the application, when the interface of the first electronic device displays the text content and the distance between the first electronic device and the second electronic device is less than or equal to the first threshold, the first electronic device can extract the first information and send it to the second electronic device, so that the second electronic device can play the voice corresponding to the text content according to the first information. In other words, when the first electronic device is brought close to the second electronic device, the second electronic device can continue the text content of the first electronic device in audio form, so that continuation across devices and across different media can be realized, and the user experience is improved.
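On the sending side the trigger combines two conditions: text content is on the interface and the measured distance is at or below the first threshold. A minimal sketch of that trigger, with the threshold value, the distance measurement and the transport left as assumed stand-ins:

```python
from typing import Callable, Optional

FIRST_THRESHOLD_METERS = 0.3  # assumed value; the embodiments only require "a first threshold"

def maybe_hand_off(displayed_source: Optional[str],
                   reading_offset: int,
                   user_token: str,
                   distance_m: float,
                   send: Callable[[dict], None]) -> bool:
    """Extract and send the first information when both trigger conditions hold."""
    if displayed_source is None:
        return False                          # no text content on the interface
    if distance_m > FIRST_THRESHOLD_METERS:
        return False                          # the two devices are not close enough yet

    first_information = {
        "source_address": displayed_source,
        "offset": reading_offset,
        "user_token": user_token,
    }
    send(first_information)                   # stand-in for the device-to-device channel
    return True
```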
Optionally, in some embodiments, the first information comprises at least one of the following information:
a source address of the text content, a current offset location of the text content, information of a user who accesses the text content.
Optionally, in some embodiments, the method 1100 further comprises:
the first electronic device receives second information, wherein the second information comprises information used for encryption;
the sending, by the first electronic device, of the first information to the second electronic device includes:
the first electronic device encrypts the first information by using the second information; and
the first electronic device sends the encrypted first information to the second electronic device.
Optionally, in some embodiments, the method 1100 further comprises:
the first electronic device receives third information, wherein the third information is used for requesting identity information of the first electronic device; and
the first electronic device sends the identity information of the first electronic device to the second electronic device.
Optionally, in some embodiments, the method 1100 further comprises:
the first electronic device receives the first information; and
the first electronic device determines, according to the first information, the text position at which the second electronic device has played.
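When the first information is returned, the first electronic device can map the offset it contains directly onto its own copy of the text, for example to scroll to or highlight the last position read aloud. A small sketch, assuming the offset is a character index:

```python
def resume_position(text_content: str, returned_first_information: dict) -> int:
    """Return the character index at which the second electronic device stopped playing,
    clamped so the first device never positions past the end of its copy of the text."""
    offset = int(returned_first_information.get("offset", 0))
    return max(0, min(offset, len(text_content)))

# Example with made-up values:
print(resume_position("lorem ipsum " * 200, {"offset": 1280}))
```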
It will be appreciated that, to implement the above functions, the electronic device comprises corresponding hardware and/or software modules for performing the respective functions. In combination with the exemplary algorithm steps described in the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In this embodiment, the electronic device may be divided into functional modules according to the above method examples; for example, a functional module may be provided for each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of the modules in this embodiment is schematic and is merely a logical function division; other division manners are possible in actual implementation.
In the case of dividing functional modules by corresponding functions, FIG. 12 shows a schematic diagram of a possible composition of the electronic device 1200 involved in the above embodiments. As shown in FIG. 12, the electronic device 1200 may include: a communication module 1210 and a play module 1220.
The communication module 1210 may be configured to enable the electronic device 1200 to perform the above step S1010, and/or other processes for the techniques described herein.
The play module 1220 may be configured to enable the electronic device 1200 to perform the above step S1020 or S930, and/or other processes for the techniques described herein.
It should be noted that, for all relevant content of the steps involved in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules; details are not described herein again.
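As a purely illustrative reading of this module split, the electronic device 1200 can be pictured as two cooperating objects, one owning the channel and one owning playback; the class and method names below are not taken from the embodiments.

```python
class CommunicationModule:
    """Performs step S1010: receive the first information from the first electronic device."""
    def receive_first_information(self) -> dict:
        raise NotImplementedError("stand-in for the transport between the two devices")

class PlayModule:
    """Performs step S1020 (or S930): play the voice corresponding to the text content."""
    def play(self, first_information: dict) -> None:
        raise NotImplementedError("stand-in for one of the two playback ways sketched above")

class ElectronicDevice1200:
    def __init__(self) -> None:
        self.communication_module = CommunicationModule()
        self.play_module = PlayModule()

    def run(self) -> None:
        self.play_module.play(self.communication_module.receive_first_information())
```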
FIG. 13 shows a schematic diagram of a possible composition of the electronic device 1300 involved in the above embodiment. As shown in FIG. 13, the electronic device 1300 may include: an extraction module 1310 and a communication module 1320.
The extraction module 1310 may be used to enable the electronic device 1300 to perform the above step S1110 or S910, and/or other processes for the techniques described herein.
The communication module 1320 may be used to enable the electronic device 1300 to perform the above step S1120 or S920, and/or other processes for the techniques described herein.
It should be noted that, for all relevant content of the steps involved in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules; details are not described herein again.
The electronic device provided in this embodiment is configured to execute the above methods of the present application, and therefore the same effects as those of the above implementation methods can be achieved.
In the case of an integrated unit, the electronic device may include a processing module, a storage module and a communication module. The processing module may be configured to control and manage actions of the electronic device, for example, to support the electronic device in performing the steps performed by the above units. The storage module may be configured to support the electronic device in storing program code, data and the like. The communication module may be configured to support communication between the electronic device and other devices.
The processing module may be a processor or a controller. It may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of the present application. The processor may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor. The storage module may be a memory. The communication module may specifically be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip, or another device that interacts with other electronic devices.
In an embodiment, when the processing module is a processor and the storage module is a memory, the electronic device in this embodiment may be a device having the structure shown in FIG. 1.
FIG. 14 shows a schematic diagram of another possible composition of the electronic device 1400 involved in the above embodiment. As shown in FIG. 14, the electronic device 1400 may include a communication unit 1410, an input unit 1420, a processing unit 1430, an output unit (which may also be referred to as a display unit) 1440, a peripheral interface 1450, a storage unit 1460, a power supply 1470, a video decoder 1480, and an audio decoder 1490.
The communication unit 1410 is used to establish a communication channel through which the electronic device 1400 connects to a remote server and downloads media data from the remote server. The communication unit 1410 may include a WLAN module, a Bluetooth module, an NFC module, a baseband module, and other communication modules, together with radio frequency (RF) circuits corresponding to these communication modules, and is configured to perform wireless local area network communication, Bluetooth communication, NFC communication, infrared communication, and/or cellular communication system communication, such as wideband code division multiple access (W-CDMA) and/or high speed downlink packet access (HSDPA). The communication unit 1410 is used for controlling communication among components in the electronic device and may support direct memory access.
The input unit 1420 may be used to enable user interaction with the electronic device and/or input of information into the electronic device. In the embodiments of the present application, the input unit may be a touch panel, another human-computer interaction interface such as a physical input key or a microphone, or another external information capturing device such as a camera.
The processing unit 1430 is a control center of the electronic device, and may connect various parts of the entire electronic device using various interfaces and lines, and perform various functions of the electronic device and/or process data by operating or executing software programs and/or modules stored in the storage unit and calling data stored in the storage unit. The above steps S626, S634, S636, S726, etc. may be implemented by the processing unit 1430.
The output unit 1440 includes, but is not limited to, an image output unit and a sound output unit. The image output unit is used for outputting characters, pictures and/or videos. In the present embodiment, the touch panel used in the input unit 1420 can also be used as the display panel of the output unit 1440. For example, when the touch panel detects a gesture operation of touch or proximity thereon, the gesture operation is transmitted to the processing unit to determine the type of the touch event, and then the processing unit provides a corresponding visual output on the display panel according to the type of the touch event. Although in fig. 14, the input unit 1420 and the output unit 1440 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel may be integrated with the display panel to implement the input and output functions of the electronic device. For example, the image output unit may display various graphical user interfaces as virtual control elements, including but not limited to windows, scroll bars, icons, and scrapbooks, for a user to operate in a touch manner.
The final positioning of the position in step S834 in the above embodiment may be realized by the output unit 1440.
The storage unit 1460 may be used to store software programs and modules, and the processing unit executes various functional applications of the electronic device and implements data processing by operating the software programs and modules stored in the storage unit.
The audio decoder 1490 can decode the audio file to obtain audio data for text continuation.
The present application also provides a system including the electronic device 1200 and the electronic device 1300 described above.
This embodiment also provides a computer storage medium in which computer instructions are stored; when the computer instructions run on an electronic device, the electronic device is caused to execute the above relevant method steps to implement the method for text continuation across devices in the foregoing embodiments.
This embodiment also provides a computer program product which, when run on a computer, causes the computer to execute the above related steps to implement the method for text continuation across devices in the above embodiments.
In addition, embodiments of the present application also provide an apparatus, which may specifically be a chip, a component or a module and may include a processor and a memory connected to each other; the memory is used for storing computer-executable instructions, and when the apparatus runs, the processor may execute the computer-executable instructions stored in the memory, so that the apparatus executes the method for text continuation across devices in the above method embodiments.
The electronic device, the computer storage medium, the computer program product, or the chip provided in this embodiment are all configured to execute the corresponding method provided above, so that the beneficial effects achieved by the electronic device, the computer storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, only the division of the above functional modules is used as an example, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is only one type of logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed to a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, or portions of the technical solutions that substantially contribute to the prior art, or all or portions of the technical solutions may be embodied in the form of a software product, where the software product is stored in a storage medium and includes several instructions to enable a device (which may be a single chip, a chip, or the like) or a processor (processor) to execute all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (21)

1. A method of text continuation across devices, comprising:
when an interface of a first electronic device displays text content and the distance between the first electronic device and a second electronic device is less than or equal to a first threshold, the first electronic device extracts first information, wherein the first information comprises information related to the text content;
the first electronic device sends the first information to the second electronic device; and
the second electronic device plays the voice corresponding to the text content according to the first information.
2. The method of claim 1, wherein the first information comprises at least one of:
a source address of the text content, a current offset location of the text content, information of a user who accesses the text content.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
the second electronic device sends second information to the first electronic device, wherein the second information comprises information used for encryption;
the first electronic device sends the first information to the second electronic device, including:
the first electronic device encrypts the first information by using the second information;
the first electronic device sends the encrypted first information to the second electronic device;
the method further comprises:
the second electronic device decrypts the encrypted first information to obtain the first information.
4. The method according to any one of claims 1 to 3, wherein the playing, by the second electronic device, the voice corresponding to the text content according to the first information includes:
the second electronic device requests the text content from a cloud server of the second electronic device according to the source address of the text content in the first information and the information of the user accessing the text content;
the cloud server of the second electronic device sends the text content to the second electronic device;
the second electronic device extracts target text content from the text content according to the current offset position of the text content in the first information;
the second electronic device converts the target text content into voice; and
the second electronic device plays the voice.
5. The method according to any one of claims 1 to 3, further comprising:
the second electronic device sends the first information to a cloud server of the second electronic device;
wherein the playing, by the second electronic device, the voice corresponding to the text content according to the first information includes:
the second electronic device requests the text content from the cloud server of the second electronic device according to the source address of the text content in the first information and the information of the user accessing the text content;
the cloud server of the second electronic device extracts target text content from the text content according to the current offset position in the first information;
the cloud server of the second electronic device converts the target text content into voice;
the cloud server of the second electronic device sends the voice to the second electronic device; and
the second electronic device plays the voice.
6. The method of claim 5, further comprising:
the cloud server of the second electronic device caches the voice.
7. The method according to any one of claims 1 to 6, further comprising:
when the distance between the first electronic device and the second electronic device is again less than or equal to the first threshold, the second electronic device sends third information to the first electronic device, wherein the third information is used for requesting identity information of the first electronic device;
the first electronic device sends the identity information of the first electronic device to the second electronic device; and
the second electronic device determines whether to extract the first information according to the identity information of the first electronic device and the identity information of the second electronic device.
8. The method of claim 7, wherein the second electronic device determining whether to extract the first information according to the identity information of the first electronic device and the identity information of the second electronic device comprises:
the second electronic device sends the identity information of the first electronic device and the identity information of the second electronic device to a cloud server of the second electronic device;
the cloud server of the second electronic device verifies the identity information of the first electronic device and the identity information of the second electronic device;
the cloud server of the second electronic device sends the verification result to the second electronic device; and
the second electronic device determines whether to extract the first information according to the verification result.
9. The method of claim 8, wherein the determining, by the second electronic device, whether to extract the first information according to the verification result comprises:
if the verification result indicates that the identity information of the first electronic device does not match the identity information of the second electronic device, the second electronic device determines not to extract the first information; or
if the verification result indicates that the identity information of the first electronic device matches the identity information of the second electronic device, the second electronic device determines to extract the first information;
the method further comprises:
the second electronic device sends the first information to the first electronic device;
the first electronic device receives the first information; and
the first electronic device determines, according to the first information, the text position at which the second electronic device has played.
10. A method of text continuation across devices, comprising:
the second electronic device receives first information sent by the first electronic device, wherein the first information comprises information related to text content; and
the second electronic device plays the voice corresponding to the text content according to the first information.
11. The method of claim 10, wherein the first information comprises at least one of:
a source address of the text content, a current offset location of the text content, information of a user who accesses the text content.
12. The method according to claim 10 or 11, characterized in that the method further comprises:
the second electronic device sends second information to the first electronic device, wherein the second information comprises information used for encryption;
the second electronic device receives first information sent by the first electronic device, including:
the second electronic device receives the encrypted first information sent by the first electronic device; and
the second electronic device decrypts the encrypted first information to obtain the first information.
13. The method according to any one of claims 10 to 12, wherein the playing, by the second electronic device, the voice corresponding to the text content according to the first information includes:
the second electronic device requests the text content from a cloud server of the second electronic device according to the source address of the text content in the first information and the information of the user accessing the text content;
the second electronic device receives the text content;
the second electronic device extracts target text content from the text content according to the current offset position of the text content in the first information;
the second electronic device converts the target text content into voice; and
the second electronic device plays the voice.
14. The method according to any one of claims 10 to 12, wherein the playing, by the second electronic device, the voice corresponding to the text content according to the first information includes:
the second electronic device requests the text content from a cloud server of the second electronic device according to the source address of the text content in the first information and the information of the user accessing the text content;
the second electronic device receives the voice sent by the cloud server of the second electronic device; and
the second electronic device plays the voice.
15. The method according to any one of claims 10 to 14, further comprising:
when the distance between the first electronic device and the second electronic device is again less than or equal to a first threshold, the second electronic device sends third information to the first electronic device, wherein the third information is used for requesting identity information of the first electronic device;
the second electronic device receives the identity information of the first electronic device sent by the first electronic device; and
the second electronic device determines whether to extract the first information according to the identity information of the first electronic device and the identity information of the second electronic device.
16. The method of claim 15, wherein the second electronic device determining whether to extract the first information according to the identity information of the first electronic device and the identity information of the second electronic device comprises:
the second electronic device sends the identity information of the first electronic device and the identity information of the second electronic device to a cloud server of the second electronic device; and
the second electronic device determines whether to extract the first information according to a verification result sent by the cloud server of the second electronic device.
17. The method of claim 16, wherein the determining, by the second electronic device, whether to extract the first information according to the verification result sent by the cloud server of the second electronic device comprises:
if the verification result indicates that the identity information of the first electronic device does not match the identity information of the second electronic device, the second electronic device determines not to extract the first information; or
if the verification result indicates that the identity information of the first electronic device matches the identity information of the second electronic device, the second electronic device determines to extract the first information;
the method further comprises:
the second electronic device sends the first information to the first electronic device.
18. An electronic device, comprising:
one or more processors;
one or more memories;
the one or more memories store one or more computer programs, the one or more computer programs comprising instructions, which when executed by the one or more processors, cause the electronic device to perform the method of any of claims 10-17.
19. A chip system, characterized in that the chip system comprises at least one processor, and when instructions are executed by the at least one processor, the functions of the method according to any one of claims 1 to 9 or 10 to 17 on the electronic device are implemented.
20. A computer storage medium comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-9 or 10-17.
21. A computer program product, which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 9 or 10 to 17.
CN202110539423.0A 2021-05-18 2021-05-18 Cross-equipment text connection method and electronic equipment Active CN115379043B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110539423.0A CN115379043B (en) 2021-05-18 2021-05-18 Cross-equipment text connection method and electronic equipment
PCT/CN2022/085233 WO2022242343A1 (en) 2021-05-18 2022-04-06 Cross-device text continuity method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110539423.0A CN115379043B (en) 2021-05-18 2021-05-18 Cross-equipment text connection method and electronic equipment

Publications (2)

Publication Number Publication Date
CN115379043A true CN115379043A (en) 2022-11-22
CN115379043B CN115379043B (en) 2024-06-04

Family

ID=84059162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110539423.0A Active CN115379043B (en) 2021-05-18 2021-05-18 Cross-equipment text connection method and electronic equipment

Country Status (2)

Country Link
CN (1) CN115379043B (en)
WO (1) WO2022242343A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118102006A * 2022-11-28 2024-05-28 Chengdu Oppo Communication Technology Co., Ltd. Service circulation method, device and storage medium


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11599328B2 (en) * 2015-05-26 2023-03-07 Disney Enterprises, Inc. Methods and systems for playing an audio corresponding to a text medium
CN109660842B * 2018-11-14 2021-06-15 Huawei Technologies Co., Ltd. Method for playing multimedia data and electronic device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103765385A * 2011-05-09 2014-04-30 Google Inc. Transferring application state across devices
CN108242233A * 2016-12-26 2018-07-03 Tencent Technology (Shenzhen) Co., Ltd. Audio data playing method and device
CN110521192A * 2017-04-28 2019-11-29 Samsung Electronics Co., Ltd. Electronic device and proximity discovery method thereof
CN110012103A * 2019-04-11 2019-07-12 Yutou Technology (Hangzhou) Co., Ltd. Control method and apparatus for a smart device, and controller and medium
CN112351412A * 2019-08-06 2021-02-09 Huawei Technologies Co., Ltd. Content continuation method, system and electronic device
WO2021023220A1 * 2019-08-06 2021-02-11 Huawei Technologies Co., Ltd. Content continuation method and system, and electronic device
CN112188362A * 2020-09-29 2021-01-05 Goertek Technology Co., Ltd. Playing method, device and computer readable storage medium

Also Published As

Publication number Publication date
WO2022242343A1 (en) 2022-11-24
CN115379043B (en) 2024-06-04

Similar Documents

Publication Publication Date Title
CN112291764B (en) Content connection system
CN108833963B (en) Method, computer device, readable storage medium and system for displaying interface picture
CN110290146B (en) Method and device for generating shared password, server and storage medium
CN111628916B (en) Method for cooperation of intelligent sound box and electronic equipment
KR102105520B1 (en) Apparatas and method for conducting a display link function in an electronic device
CN113259301B (en) Account data sharing method and electronic equipment
CN112527174B (en) Information processing method and electronic equipment
CN110752929B (en) Application program processing method and related product
CN115039378A (en) Audio output method and terminal equipment
EP3989113A1 (en) Facial image transmission method, numerical value transfer method and apparatus, and electronic device
US20230254143A1 (en) Method for Saving Ciphertext and Apparatus
CN112398855A (en) Method and device for transferring application contents across devices and electronic device
CN111061524A (en) Application data processing method and related device
CN114722377A (en) Method, electronic device and system for authorization by using other devices
US20240095408A1 (en) Data protection method and system, medium, and electronic device
WO2022135157A1 (en) Page display method and apparatus, and electronic device and readable storage medium
CN115379043B (en) Cross-equipment text connection method and electronic equipment
CN113590346B (en) Method and electronic equipment for processing service request
CN114449200B (en) Audio and video call method and device and terminal equipment
CN114692119A (en) Method for verifying application and electronic equipment
RU2809740C2 (en) Method for processing file stored in external memory
CN115460445B (en) Screen projection method of electronic equipment and electronic equipment
WO2022042273A1 (en) Key using method and related product
CN114793288B (en) Authority information processing method, device, server and medium
CN117631950A (en) Split screen display method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant