CN111741394A - Data processing method and device and readable medium - Google Patents


Info

Publication number
CN111741394A
Authority
CN
China
Prior art keywords
data
processing
processing result
target data
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010508183.3A
Other languages
Chinese (zh)
Inventor
王颖
张硕
张丹
刘宝
梁宵
杨天府
荣河江
李鹏翀
李建涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sogou Intelligent Technology Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd filed Critical Beijing Sogou Technology Development Co Ltd
Priority to CN202010508183.3A priority Critical patent/CN111741394A/en
Publication of CN111741394A publication Critical patent/CN111741394A/en
Priority to PCT/CN2021/074911 priority patent/WO2021244056A1/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1033 Cables or cables storage, e.g. cable reels
    • H04R1/1083 Reduction of ambient noise
    • H04R1/1091 Details not provided for in groups H04R1/1008 - H04R1/1083

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)
  • Headphones And Earphones (AREA)

Abstract

Embodiments of the present application provide a data processing method, a data processing device, and a readable medium. In the method, an earphone collects target data and sends it to an earphone storage device, which processes the target data to obtain a processing result and then sends the result to the earphone, and/or stores, displays, or plays the result; alternatively, the earphone sends the target data to a cloud server through a mobile communication chip built into the earphone, the cloud server processes the data to obtain a processing result, and the result is sent to the earphone and/or stored. In this way the storage device or the cloud server, each with stronger computing power than the earphone, processes the target data, so the earphone can draw on their computing, storage, display, and playback capabilities, solving the problem that functions with high computing-power requirements cannot be implemented on the earphone alone.

Description

Data processing method and device and readable medium
Technical Field
The present application relates to the field of wireless headset technology, and in particular, to a data processing method, a data processing apparatus, an apparatus for data processing, and a machine-readable medium.
Background
With the development of wireless earphone technology, true wireless stereo (TWS) earphones are becoming increasingly popular with consumers. The left and right bodies of a true wireless earphone are completely separate, with no exposed wire to be seen, which is what makes it "truly" wireless. Compared with a traditional wireless earphone, the connection of a true wireless earphone involves not only signal transmission between the earphone and the signal-transmitting device but also a wireless connection between the primary and secondary earphones. The case of such an earphone exists only as an accessory for storing and charging the earphones.
Because a wireless earphone must remain small, it cannot carry strong computing power, and functions that place high demands on computing power therefore cannot be implemented on the earphone itself.
Disclosure of Invention
In view of the above problems, embodiments of the present application provide a data processing method, a data processing apparatus, an apparatus for data processing, and a machine-readable medium that overcome, or at least partially solve, these problems, in particular the inability to implement computation-intensive functions relying on the earphone alone.
In order to solve the above problem, the present application discloses a data processing method applied to an earphone, including:
collecting target data;
sending the target data to an earphone storage device, for the earphone storage device to process the target data to obtain a processing result and to send the processing result to the earphone, and/or store the processing result, and/or display the processing result, and/or play the processing result; or sending the target data to a cloud server through a mobile communication chip built into the earphone, for the cloud server to process the target data to obtain a processing result and to send the processing result to the earphone and/or store the processing result.
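As a rough illustration of the dispatch step above (not part of the claimed method), the earphone's choice between the storage case and the cloud server could be sketched as follows; the function name and conditions are hypothetical:

```python
# Hypothetical sketch of the dispatch described above. The earphone sends
# collected target data either to the storage case over Bluetooth, or to a
# cloud server through its built-in mobile communication chip.
def route_target_data(case_connected: bool, has_cellular: bool) -> str:
    """Return the destination the earphone would send target data to."""
    if case_connected:
        return "case"    # case processes the data and returns/keeps the result
    if has_cellular:
        return "cloud"   # built-in chip uploads directly to the cloud server
    return "local"       # no helper available: handle on the earphone itself
```

Either path offloads the computation, which is the point of the method: the earphone only collects and forwards.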
Optionally, after the sending the target data to a headset storage device, the method further includes:
and receiving the processing result sent by the earphone accommodating device.
Optionally, after the receiving the processing result sent by the headset storing device, the method further includes:
and sending the processing result to the mobile terminal.
Optionally, after the receiving the processing result sent by the headset storing device, the method further includes:
and playing the processing result.
Optionally, after the receiving the processing result sent by the headset storing device, the method further includes:
storing the processing result on a storage medium on the headset.
Optionally, the target data or the processing result is transmitted between the earphone and the earphone storage device through Bluetooth.
Optionally, the acquiring target data comprises at least one of:
when the target data comprises audio data, acquiring the audio data through a microphone arranged on the earphone;
when the target data comprises acceleration data, acquiring the acceleration data through an acceleration sensor arranged on the earphone;
when the target data comprises temperature data, acquiring the temperature data through a temperature sensor arranged on the earphone;
when the target data comprise heart rate data, the heart rate data are collected through a heart rate sensor arranged on the earphone.
Optionally, the processing result includes at least one of: a voice processing result obtained based on a voice processing function; target audio data or target text data in a target language obtained by translating the audio data; a mark of target audio in the audio data; text data converted from the audio data; memo information; and reminder information.
Optionally, when the target data includes temperature data and/or heart rate data, the processing result includes body state information.
Optionally, when the target data includes acceleration data, the processing result includes a motion state of the user.
The embodiment of the application also discloses a data processing method, which is applied to the earphone accommodating device and comprises the following steps:
acquiring target data;
processing the target data to obtain a processing result;
sending the processing result to an earphone, and/or storing the processing result, and/or displaying the processing result, and/or playing the processing result, and/or sending the processing result to a cloud server.
Optionally, the acquiring target data includes:
receiving the target data from the headset.
Optionally, the target data includes audio data, and the acquiring the target data includes:
the audio data is collected through a microphone array arranged on the earphone receiving device.
Optionally, the processing the target data to obtain a processing result includes:
detecting the source direction and/or type of the environmental sound according to the audio data;
and generating prompt information according to the source direction and/or the type of the environmental sound.
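One conventional way to estimate the source direction from a microphone-array recording is the time difference of arrival between two microphones; the patent does not specify an algorithm, so the sketch below is purely illustrative:

```python
def tdoa_samples(left, right):
    """Delay (in samples) of the right channel relative to the left,
    found by brute-force cross-correlation of the two channels."""
    n = len(left)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-(n - 1), n):
        score = sum(left[i] * right[i + lag]
                    for i in range(n) if 0 <= i + lag < n)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

def coarse_direction(lag):
    """A positive lag means the sound reached the left microphone first."""
    if lag > 0:
        return "left"
    if lag < 0:
        return "right"
    return "front"
```

A prompt ("sound approaching from the left") could then be generated from the coarse direction.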
Optionally, the processing the target data to obtain a processing result includes:
and carrying out noise reduction processing on the audio data to obtain the audio data after the noise reduction processing.
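Production noise reduction typically works in the spectral domain; the toy noise gate below merely illustrates the idea of suppressing low-level content and is not the patent's method:

```python
def noise_gate(samples, threshold=0.02):
    """Zero out samples whose magnitude falls below a noise threshold,
    passing louder samples through unchanged."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]
```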
Optionally, the processing the target data to obtain a processing result includes:
and carrying out echo cancellation processing on the audio data to obtain the audio data after the echo cancellation processing.
Optionally, the target data includes audio data, and the processing the target data to obtain a processing result includes:
carrying out voice processing on the audio data to obtain voice-processed audio data; and/or carrying out sound effect processing on the audio data to obtain the audio data after the sound effect processing.
Optionally, the target data includes audio data, and the processing result includes a sound recording file.
Optionally, the processing the target data to obtain a processing result includes:
and marking target audio in the audio data.
Optionally, the target data includes audio data, and the processing the target data to obtain a processing result includes:
converting the audio data into the text data.
Optionally, the processing the target data to obtain a processing result further includes:
identifying preset type target information in the text data;
and generating memo information or reminding information according to the target information.
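For instance, once the audio has been converted to text, a preset pattern could be matched to build a reminder entry; the phrase format below is an assumed example, not one specified by the patent:

```python
import re

def extract_reminders(text):
    """Find occurrences of the preset pattern 'remind me to <task> at <HH:MM>'
    in recognized text and turn each match into a reminder entry."""
    pattern = re.compile(r"remind me to (.+?) at (\d{1,2}:\d{2})", re.IGNORECASE)
    return [{"task": task.strip(), "time": time}
            for task, time in pattern.findall(text)]
```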
Optionally, the target data includes text data, and the processing the target data to obtain a processing result includes:
and generating voice synthesis data according to the text data.
Optionally, the target data includes audio data, and the processing the target data to obtain a processing result includes:
and translating to obtain target audio data or target text data of the target language according to the audio data.
Optionally, the target data includes temperature data and/or heart rate data, and the processing the target data to obtain a processing result includes:
and generating body state information according to the temperature data and/or the heart rate data.
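A simple rule-based reading of the sensor data might look like the following; the thresholds are illustrative assumptions, not values from the patent:

```python
def body_state(temp_c=None, heart_rate_bpm=None):
    """Derive body-state notes from temperature and/or heart-rate readings."""
    notes = []
    if temp_c is not None:
        if temp_c >= 37.5:
            notes.append("elevated temperature")
        elif temp_c < 35.0:
            notes.append("low temperature")
    if heart_rate_bpm is not None:
        if heart_rate_bpm > 100:
            notes.append("elevated heart rate")
        elif heart_rate_bpm < 50:
            notes.append("low heart rate")
    return notes or ["normal"]
```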
Optionally, the target data includes acceleration data, and the processing the target data to obtain a processing result includes:
and identifying the motion state of the user according to the acceleration data.
Optionally, the method further comprises:
determining a target process associated with the motion state;
the target process is executed.
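The three steps above (identify the motion state, determine its associated target processing, execute it) could be sketched with accelerometer magnitudes as input; the thresholds and actions are hypothetical:

```python
import statistics

def motion_state(magnitudes):
    """Classify motion from the spread of acceleration magnitudes (m/s^2)."""
    spread = statistics.pstdev(magnitudes)
    if spread < 0.1:
        return "still"
    if spread < 2.0:
        return "walking"
    return "running"

# Target processing associated with each motion state (illustrative).
STATE_ACTIONS = {
    "still": "pause step counting",
    "walking": "count steps",
    "running": "count steps and track pace",
}

def run_target_process(magnitudes):
    return STATE_ACTIONS[motion_state(magnitudes)]
```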
Optionally, the target data includes acceleration data and position data, and the performing target processing on the target data to obtain a processing result includes:
and generating navigation prompt information according to the acceleration data and the position data.
Optionally, the target data includes audio data, and the performing target processing on the target data to obtain a processing result includes:
and performing voice processing on the audio data based on a voice processing function to obtain a voice processing result.
Optionally, the performing target processing on the target data to obtain a processing result includes:
sending the target data to a cloud server;
and receiving a processing result obtained by processing the target data by the cloud server.
Optionally, the method further comprises:
detecting whether the target data meets a preset requirement or a user setting;
if the target data meets the preset requirement or the user setting, performing the step of sending the target data to the cloud server;
and if the target data does not meet the preset requirement or the user setting, processing the target data by the earphone storage device.
Optionally, the detecting whether the target data meets a preset requirement includes:
if the data volume of the target data exceeds a set threshold, determining that the target data meets a preset requirement;
and if the data volume of the target data does not exceed a set threshold, determining that the target data does not meet preset requirements.
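The threshold rule above amounts to a one-line router; the byte threshold here is an assumed placeholder, not a value from the patent:

```python
def choose_processor(data_size_bytes, threshold_bytes=1_000_000,
                     user_prefers_cloud=False):
    """Send target data to the cloud when its volume exceeds the threshold
    (or the user has asked for cloud processing); otherwise let the earphone
    storage device process it locally."""
    if user_prefers_cloud or data_size_bytes > threshold_bytes:
        return "cloud"
    return "case"
```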
Optionally, the headset storage device is connected to the internet through a mobile communication network or a wireless local area network.
The embodiment of the present application further discloses a data processing apparatus, which is applied to an earphone, and includes:
the data acquisition module is used for acquiring target data;
the data sending module is used for sending the target data to an earphone accommodating device so that the earphone accommodating device can process the target data to obtain a processing result, and sending the processing result to the earphone, and/or storing the processing result, and/or displaying the processing result, and/or playing the processing result; or sending the target data to a cloud server through a mobile communication chip arranged in the earphone, so that the cloud server processes the target data to obtain a processing result, and sending the processing result to the earphone and/or storing the processing result.
Optionally, the apparatus further comprises:
and the result receiving module is used for receiving the processing result sent by the earphone accommodating device after the target data is sent to the earphone accommodating device.
Optionally, the apparatus further comprises:
and the result sending module is used for sending the processing result to the mobile terminal after receiving the processing result sent by the earphone accommodating device.
Optionally, the apparatus further comprises:
and the result playing module is used for playing the processing result after receiving the processing result sent by the earphone accommodating device.
Optionally, the apparatus further comprises:
a result storage module, configured to store the processing result on a storage medium on the headset after receiving the processing result sent by the headset storing device.
Optionally, the target data or the processing result is transmitted between the earphone and the earphone storage device through Bluetooth.
Optionally, the data acquisition module comprises at least one of:
when the target data comprises audio data, an audio acquisition sub-module is used for acquiring the audio data through a microphone arranged on the earphone;
when the target data comprises acceleration data, an acceleration acquisition submodule is used for acquiring the acceleration data through an acceleration sensor arranged on the earphone;
when the target data comprises temperature data, the temperature acquisition submodule is used for acquiring the temperature data through a temperature sensor arranged on the earphone;
when the target data comprise heart rate data, the heart rate acquisition submodule is used for acquiring the heart rate data through a heart rate sensor arranged on the earphone.
Optionally, the processing result includes at least one of: a voice processing result obtained based on a voice processing function; target audio data or target text data in a target language obtained by translating the audio data; a mark of target audio in the audio data; text data converted from the audio data; memo information; and reminder information.
Optionally, when the target data includes temperature data and/or heart rate data, the processing result includes body state information.
Optionally, when the target data includes acceleration data, the processing result includes a motion state of the user.
The embodiment of the application also discloses a data processing device, is applied to earphone storage device, includes:
the data acquisition module is used for acquiring target data;
the data processing module is used for processing the target data to obtain a processing result;
and the result processing module is used for sending the processing result to an earphone, and/or storing the processing result, and/or displaying the processing result, and/or playing the processing result, and/or sending the processing result to a cloud server.
Optionally, the data obtaining module includes:
and the data receiving submodule is used for receiving the target data from the earphone.
Optionally, the target data includes audio data, and the data obtaining module includes:
and the data acquisition submodule is used for acquiring the audio data through a microphone array arranged on the earphone receiving device.
Optionally, the data processing module includes:
the environment sound detection submodule is used for detecting the source direction and/or the type of the environment sound according to the audio data;
and the prompt generation submodule is used for generating prompt information according to the source direction and/or the type of the environment sound.
Optionally, the data processing module includes:
and the noise reduction sub-module is used for performing noise reduction processing on the audio data to obtain the audio data after the noise reduction processing.
Optionally, the data processing module includes:
and the echo cancellation submodule is used for carrying out echo cancellation processing on the audio data to obtain the audio data after the echo cancellation processing.
Optionally, the target data includes audio data, and the data processing module includes:
the voice effect submodule is used for carrying out voice processing on the audio data to obtain voice processed audio data; and/or carrying out sound effect processing on the audio data to obtain the audio data after the sound effect processing.
Optionally, the target data includes audio data, and the processing result includes a sound recording file.
Optionally, the data processing module includes:
and the marking sub-module is used for marking the target audio in the audio data.
Optionally, the target data includes audio data, and the data processing module includes:
and the text conversion submodule is used for converting the audio data into the text data.
Optionally, the data processing module further includes:
the information identification submodule is used for identifying preset type target information in the text data;
and the memo reminding generation submodule is used for generating memo information or reminding information according to the target information.
Optionally, the target data includes text data, and the data processing module includes:
and the voice generation submodule is used for generating voice synthesis data according to the text data.
Optionally, the target data includes audio data, and the data processing module includes:
and the translation submodule is used for translating the audio data to obtain target audio data or target text data of the target language.
Optionally, the target data comprises temperature data and/or heart rate data, and the data processing module comprises:
and the body state generating submodule is used for generating body state information according to the temperature data and/or the heart rate data.
Optionally, the target data includes acceleration data, and the data processing module includes:
and the motion state identification submodule is used for identifying the motion state of the user according to the acceleration data.
Optionally, the apparatus further comprises:
a process determination module for determining a target process associated with the motion state;
and the processing execution module is used for executing the target processing.
Optionally, the target data includes acceleration data and position data, and the data processing module includes:
and the navigation generation submodule is used for generating navigation prompt information according to the acceleration data and the position data.
Optionally, the target data includes audio data, and the data processing module includes:
and the voice processing submodule is used for carrying out voice processing on the audio data based on the voice processing function to obtain a voice processing result.
Optionally, the data processing module includes:
the data sending submodule is used for sending the target data to a cloud server;
and the result receiving submodule is used for receiving a processing result obtained by processing the target data by the cloud server.
Optionally, the apparatus further comprises:
the requirement detection module is used for detecting whether the target data meets preset requirements or user settings;
the cloud execution module is used for executing the step of sending the target data to a cloud server if the target data meets the preset requirements or user settings;
and the storage device execution module is used for processing the target data by the earphone storage device if the target data does not meet the preset requirements or user settings.
Optionally, the requirement detection module is specifically configured to:
if the data volume of the target data exceeds a set threshold, determining that the target data meets a preset requirement;
and if the data volume of the target data does not exceed a set threshold, determining that the target data does not meet preset requirements.
Optionally, the headset storage device is connected to the internet through a mobile communication network or a wireless local area network.
The embodiment of the application also discloses a device for data processing, which comprises a memory and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs are configured to be executed by one or more processors and comprise instructions for:
collecting target data;
sending the target data to an earphone accommodating device for the earphone accommodating device to process the target data to obtain a processing result, sending the processing result to the earphone, and/or storing the processing result, and/or displaying the processing result, and/or playing the processing result; or sending the target data to a cloud server through a mobile communication chip arranged in the earphone, so that the cloud server processes the target data to obtain a processing result, and sending the processing result to the earphone and/or storing the processing result.
Optionally, after the sending the target data to a headset storing device, the instructions of the operation further include:
and receiving the processing result sent by the earphone accommodating device.
Optionally, after the receiving the processing result sent by the headset storing device, the operating instructions further include:
and sending the processing result to the mobile terminal.
Optionally, after the receiving the processing result sent by the headset storing device, the operating instructions further include:
and playing the processing result.
Optionally, after the receiving the processing result sent by the headset storing device, the operating instructions further include:
storing the processing result on a storage medium on the headset.
Optionally, the target data or the processing result is transmitted between the earphone and the earphone storage device through Bluetooth.
Optionally, the acquiring target data comprises at least one of:
when the target data comprises audio data, acquiring the audio data through a microphone arranged on the earphone;
when the target data comprises acceleration data, acquiring the acceleration data through an acceleration sensor arranged on the earphone;
when the target data comprises temperature data, acquiring the temperature data through a temperature sensor arranged on the earphone;
when the target data comprise heart rate data, the heart rate data are collected through a heart rate sensor arranged on the earphone.
Optionally, the processing result includes at least one of: a voice processing result obtained based on a voice processing function; target audio data or target text data in a target language obtained by translating the audio data; a mark of target audio in the audio data; text data converted from the audio data; memo information; and reminder information.
Optionally, when the target data includes temperature data and/or heart rate data, the processing result includes body state information.
Optionally, when the target data includes acceleration data, the processing result includes a motion state of the user.
The embodiment of the application also discloses a device for data processing, which comprises a memory and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs are configured to be executed by one or more processors and comprise instructions for:
acquiring target data;
processing the target data to obtain a processing result;
sending the processing result to an earphone, and/or storing the processing result, and/or displaying the processing result, and/or playing the processing result, and/or sending the processing result to a cloud server.
Optionally, the acquiring target data includes:
receiving the target data from the headset.
Optionally, the target data includes audio data, and the acquiring the target data includes:
the audio data is collected through a microphone array arranged on the earphone receiving device.
Optionally, the processing result includes prompt information, and the processing the target data to obtain the processing result includes:
detecting the source direction and/or type of the environmental sound according to the audio data;
and generating prompt information according to the source direction and/or the type of the environmental sound.
Optionally, the processing the target data to obtain a processing result includes:
and carrying out noise reduction processing on the audio data to obtain the audio data after the noise reduction processing.
Optionally, the processing the target data to obtain a processing result includes:
and carrying out echo cancellation processing on the audio data to obtain the audio data after the echo cancellation processing.
Optionally, the target data includes audio data, and the processing the target data to obtain a processing result includes:
carrying out voice processing on the audio data to obtain voice-processed audio data; and/or carrying out sound effect processing on the audio data to obtain the audio data after the sound effect processing.
Optionally, the target data includes audio data, and the processing result includes a sound recording file.
Optionally, the processing the target data to obtain a processing result includes:
and marking target audio in the audio data.
Optionally, the target data includes audio data, and the processing the target data to obtain a processing result includes:
converting the audio data into the text data.
Optionally, the processing the target data to obtain a processing result further includes:
identifying preset type target information in the text data;
and generating memo information or reminding information according to the target information.
Optionally, the target data includes text data, and the processing the target data to obtain a processing result includes:
and generating voice synthesis data according to the text data.
Optionally, the target data includes audio data, and the processing the target data to obtain a processing result includes:
and translating to obtain target audio data or target text data of the target language according to the audio data.
Optionally, the target data includes temperature data and/or heart rate data, and the processing the target data to obtain a processing result includes:
and generating body state information according to the temperature data and/or the heart rate data.
Optionally, the target data includes acceleration data, and the processing the target data to obtain a processing result includes:
and identifying the motion state of the user according to the acceleration data.
Optionally, the instructions of the operations further comprise:
determining a target process associated with the motion state;
the target process is executed.
Optionally, the target data includes acceleration data and position data, and the performing target processing on the target data to obtain a processing result includes:
and generating navigation prompt information according to the acceleration data and the position data.
Optionally, the target data includes audio data, and the performing target processing on the target data to obtain a processing result includes:
and performing voice processing on the audio data based on a voice processing function to obtain a voice processing result.
Optionally, the performing target processing on the target data to obtain a processing result includes:
sending the target data to a cloud server;
and receiving a processing result obtained by processing the target data by the cloud server.
Optionally, the instructions of the operations further comprise:
detecting whether the target data meets a preset requirement or a user setting;
if the target data meets the preset requirement or the user setting, performing the step of sending the target data to the cloud server;
and if the target data does not meet the preset requirement or the user setting, processing the target data by the earphone storage device.
Optionally, the detecting whether the target data meets a preset requirement includes:
if the data volume of the target data exceeds a set threshold, determining that the target data meets a preset requirement;
and if the data volume of the target data does not exceed a set threshold, determining that the target data does not meet preset requirements.
Optionally, the headset storage device is connected to the internet through a mobile communication network or a wireless local area network.
The embodiment of the application also discloses a machine readable medium, wherein instructions are stored on the machine readable medium, and when the instructions are executed by one or more processors, the device is caused to execute the data processing method.
The embodiment of the application has the following advantages:
in summary, according to the embodiments of the present application, the earphone collects target data and sends it to the earphone storage device, so that the earphone storage device processes the target data to obtain a processing result and then sends the processing result to the earphone, and/or stores, displays, or plays the processing result; or the earphone sends the target data to a cloud server through a built-in mobile communication chip, so that the cloud server processes the target data to obtain a processing result and then sends the processing result to the earphone and/or stores it. In this way, the earphone storage device or the cloud server, which has stronger computing capability, processes the target data, and the earphone can utilize the computing, storing, displaying, and/or playing capabilities of the earphone storage device or the cloud server, which solves the problem that functions with higher computing power requirements cannot be realized by the earphone alone.
Drawings
FIG. 1 shows a flow chart of the steps of an embodiment of a data processing method of the present application;
FIG. 2 shows a flow chart of steps of another data processing method embodiment of the present application;
FIG. 3 shows a flow chart of steps of yet another data processing method embodiment of the present application;
FIG. 4 is a flow chart illustrating the steps of yet another data processing method embodiment of the present application;
FIG. 5 is a block diagram illustrating an embodiment of a data processing apparatus of the present application;
FIG. 6 shows a block diagram of another data processing apparatus embodiment of the present application;
FIG. 7 is a block diagram illustrating an apparatus for data processing in accordance with an exemplary embodiment;
FIG. 8 is a block diagram of a server in some embodiments of the invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a data processing method according to the present application is shown, and is applied to a headset, where the method specifically includes the following steps:
step 101, collecting target data.
In this embodiment of the present application, the earphone may collect data in addition to playing sound. For example, a microphone disposed on the earphone may collect audio data, an acceleration sensor disposed on the earphone may collect acceleration data, and so on, or any other suitable data, which is not limited in this embodiment of the present application. One or more types of data collected by the earphone are recorded as the target data.
In the embodiment of the present application, optionally, the target data may be collected in multiple ways. For example, when the target data includes audio data, the audio data may be collected through a microphone disposed on the headset; when the target data includes acceleration data, the acceleration data is collected through an acceleration sensor disposed on the earphone; when the target data includes temperature data, the temperature data is collected through a temperature sensor disposed on the earphone; when the target data includes heart rate data, the heart rate data is collected through a heart rate sensor disposed on the earphone. The target data may be collected by any suitable method, which is not limited in the embodiment of the present application.
For example, when a user uses an earphone to make a call with another person, the earphone collects the user's speech through a microphone.
Step 102, sending the target data to an earphone accommodating device, for the earphone accommodating device to process the target data to obtain a processing result, and to send the processing result to the earphone, and/or store the processing result, and/or display the processing result, and/or play the processing result; or sending the target data to a cloud server through a mobile communication chip built into the earphone, for the cloud server to process the target data to obtain a processing result, and to send the processing result to the earphone and/or store the processing result.
In the embodiment of the present application, the earphone storage device no longer serves only auxiliary functions such as storage and/or charging, but has a certain computing capability; various kinds of processing on the target data can be realized by combining the hardware and software of the earphone storage device. For example, the earphone receiving device may beautify or otherwise change the voice of the user, convert the words spoken by the user into text, or translate the words spoken by the user into audio or text in other languages, or provide any other suitable functions, which is not limited in this embodiment of the application.
In this application embodiment, after collecting the target data, the earphone can send it to the earphone storage device so that the earphone storage device can process the target data to obtain a processing result. The processing of the target data may include one or more kinds of processing. For example, when the user uses the headset to make a phone call with another person, the headset may transmit the collected audio data to the headset storage device, and the headset storage device may beautify or otherwise change the voice in the audio data, and may translate the speech spoken by the user in the audio data into text in a language selected by the user, so as to obtain a translation result of the text type.
In the embodiment of the present application, the earphone accommodating device may send the processing result to the earphone, and/or store the processing result, and/or display the processing result, and/or play the processing result; the earphone accommodating device may perform one or more of the above operations on the processing result. Of course, the earphone storage device can store the processing result only when it has storage space, display the processing result only when it has a display, and play the processing result only when it has a speaker.
For example, when a user uses the earphone to make a call with another person, the earphone receiving device sends the voice-beautified or otherwise changed audio data to the earphone, and the earphone sends it to a connected mobile phone, which completes the call with the other person. Meanwhile, the earphone receiving device can translate the speech spoken by the user into text in a language selected by the user and store the text-type translation result in a storage medium on the earphone receiving device. If the earphone receiving device is provided with a display, it can display the text-type translation result; if it is provided with a loudspeaker, it can translate the speech spoken by the user into audio in the selected language and play the audio-type translation result.
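The capability-dependent handling of the processing result described above can be sketched as follows. This is an illustrative sketch only: the `StorageDevice` class, its capability flags, and the action tuples are assumptions for demonstration, not part of the disclosure.

```python
from dataclasses import dataclass, field


@dataclass
class StorageDevice:
    """Hypothetical earphone storage device with optional capabilities."""
    has_storage: bool = False
    has_display: bool = False
    has_speaker: bool = False
    stored: list = field(default_factory=list)
    actions: list = field(default_factory=list)

    def dispatch(self, result: str) -> list:
        # Always forward the processing result back to the earphone.
        self.actions.append(("send_to_earphone", result))
        # Optional operations depend on the hardware actually present.
        if self.has_storage:
            self.stored.append(result)
            self.actions.append(("store", result))
        if self.has_display:
            self.actions.append(("display", result))
        if self.has_speaker:
            self.actions.append(("play", result))
        return self.actions
```

A device with storage and a speaker but no display would thus send, store, and play a translation result while never attempting to display it.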
In this application embodiment, the earphone storage device can also use the cloud server to realize the above functions. For example, the earphone storage device sends the audio data to the cloud server; after the cloud server completes the translation, it sends the translation result back to the earphone storage device, which then sends it to the earphone.
In this embodiment of the application, optionally, the target data or the processing result is transmitted between the headset and the headset receiving device through Bluetooth. The earphone and the earphone containing device are both provided with Bluetooth chips, through which a connection can be established to transmit the target data or the processing result.
A common Bluetooth earphone has only one group of Bluetooth chips, through which it transmits data with a mobile terminal or other electronic equipment when connected via Bluetooth. To realize the technical scheme of the present application, the earphone also needs to connect to the earphone storage device through Bluetooth; therefore, the Bluetooth earphone may have two groups of Bluetooth chips, one group for transmitting data with the mobile terminal or other electronic equipment, and the other group for transmitting the target data or the processing result with the earphone storage device.
In the embodiment of the present application, the headset may have a built-in mobile communication chip, for example, a 4G (fourth-generation mobile communication technology) chip or a 5G (fifth-generation mobile communication technology) chip. The earphone can send the target data to the cloud server through the built-in mobile communication chip. The cloud server can provide stronger computing power, so that more complex processing can be performed on the target data and more complex functions can be realized, while the power consumption of the earphone storage device is reduced, the processing speed is increased, and so on.
In the embodiment of the application, the cloud server can process the target data to obtain a processing result. The processing of the target data may include one or more kinds of processing. For example, when the user uses the headset to make a phone call with another person, the headset sends the collected audio data to the cloud server, and the cloud server may beautify or otherwise change the voice in the audio data, and may translate the speech spoken by the user in the audio data into text in a language selected by the user, so as to obtain a translation result of the text type. Any suitable processing may specifically be included, and the present application is not limited to this.
In this embodiment, the cloud server may send the processing result to the headset, and/or store the processing result, and the cloud server may perform one or more of the above operations on the processing result.
In the embodiment of the present application, optionally, the processing result includes at least one of the following: audio data after noise reduction processing, audio data after echo cancellation processing, audio data after human voice processing, audio data after sound effect processing, speech synthesis data, a recording file generated from the audio data, a marker of target audio in the audio data, text data converted from the audio data, memo information, reminding information, target audio data or target text data of a target language obtained by translating the audio data, and a voice processing result obtained based on a voice processing function.
After the earphone collects the audio data and the earphone storage device or the cloud server receives it, noise reduction processing may be performed on the audio data to obtain noise-reduced audio data, or echo cancellation processing may be performed to obtain echo-cancelled audio data, or human voice processing may be performed to obtain voice-processed audio data, or sound effect processing may be performed to obtain sound-effect-processed audio data.
The human voice processing may include adjusting pitch and timbre, making the voice soft, sharp, or magnetic, or changing it into another person's voice, so as to beautify or otherwise change the human voice, or any other suitable human voice processing, which is not limited in this embodiment of the application. The sound effect processing may include inserting sound effects such as applause, music, laughter, or animal sounds into the audio data, or processing the audio data through a digital sound effect processor so that the sound takes on different spatial characteristics, such as a hall, an opera house, a cinema, a karst cave, or a stadium. An ambient sound effect is mainly obtained by processing the sound through ambient filtering, ambient displacement, ambient reflection, ambient transition, and the like, so that the listener feels placed in different environments, or any other suitable sound effect processing, which is not limited in this embodiment of the application. The speech synthesis data may include artificial speech generated electronically, for example by TTS (Text To Speech) technology, which can offer a variety of timbres and adjust speech rate, intonation, volume, and so on. For example, a telephone number on a mobile phone is transmitted to the earphone, the earphone storage device synthesizes speech synthesis data through TTS technology and transmits it to the earphone, and the earphone broadcasts the telephone number.
The earphone can send the audio data to the earphone storage device or the cloud server, and the earphone storage device or the cloud server generates a recording file according to the audio data and stores it on the earphone storage device or the cloud server. The recording includes telephone recording, voice memo, or any other suitable recording, which is not limited in the embodiment of the present application. The processing result may also include a marker of target audio in the audio data. When target audio in a call is marked, the target audio is stored and marked; it may be marked as important content or with other labels, specifically including any applicable label, which is not limited in the embodiment of the present application.
The earphone storage device or the cloud server can perform text conversion according to the audio data to obtain text data, or perform translation according to the audio data to obtain target audio data or target text data of a target language. The earphone storage device or the cloud server can also identify text data, extract information such as time, place, task and event, and automatically convert the identified content into information formats such as memo information and reminding information to serve as a processing result. The translation processing function includes telephone translation, dialogue translation, simultaneous interpretation, and the like, or any other suitable translation processing, which is not limited in the embodiment of the present application.
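The conversion of recognized text into memo or reminder information can be sketched as a small extraction step. This is a minimal illustration under stated assumptions: real systems would use natural-language understanding models, and the `extract_reminder` function, its regex patterns, and the record format are hypothetical.

```python
import re


def extract_reminder(text):
    """Turn recognized text into a reminder record if it mentions a time.

    A minimal sketch: the regex below only matches simple English
    patterns like 'at 3 pm' or 'tomorrow'; a production system would
    extract time, place, task, and event with an NLU model.
    """
    time_match = re.search(
        r"\b(at \d{1,2}(:\d{2})?\s?(am|pm)|tomorrow|tonight)\b", text, re.I
    )
    if not time_match:
        return None
    # Package the match as a reminder-information record.
    return {"type": "reminder", "when": time_match.group(0), "what": text}
```

Text without any recognizable time expression simply yields no reminder, leaving the plain text conversion as the processing result.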
The voice processing includes a processing procedure of recognizing and understanding the audio data and making corresponding feedback, including converting the audio data into corresponding text or command, recognizing the voice information in the audio data, and making corresponding feedback according to understanding, or any other suitable processing, which is not limited in this embodiment of the present application.
In this embodiment, the speech processing function includes the algorithms, databases, computing resources, and the like called upon to process speech in the audio data, or any other applicable content related to speech processing, which is not limited in this embodiment of the present application. For example, the audio data is "What's the weather tomorrow?"; the voice processing function performs voice processing on it to obtain a voice processing result "Tomorrow will be clear with a temperature of 27 degrees", which is played on the earphone.
For example, one speech processing function calls only local computing resources and recognizes the speech in the audio with a local speech recognition model that stores speech features extracted from previously collected audio; the recognizable speech is limited to the features in the local model, and the recognition speed is limited by local computing resources. Another speech processing function uses the cloud server: the audio is uploaded to the cloud server, computing resources on the cloud server are called, the speech in the audio is recognized with a speech recognition model, the speech is understood, and corresponding feedback is made; no longer limited by local computing resources and sample banks, it can achieve a better speech processing effect and obtain more complex and varied results.
In this embodiment of the application, optionally, when the target data includes temperature data and/or heart rate data, the processing result includes body state information. The body state information is used to represent the body state of the user, for example, a numerical value of the user's body temperature or a prompt that the body temperature is too high, normal, or too low; a numerical value of the user's heart rate or a prompt that the heart rate is too fast, too slow, or normal; or any other suitable body state information, which is not limited in this application embodiment.
The earphone storage device or the cloud server can generate the body state information according to the temperature data and/or the heart rate data. For example, the average body temperature of the user in one day is calculated according to the temperature data, the average heart rate of the user in one day is calculated according to the heart rate data, and the average body temperature and average heart rate are used as the body state information; or, according to the temperature interval in which the temperature data falls, it is determined whether the user's body temperature is too high, normal, or too low; or, according to the heart rate interval in which the heart rate data falls, it is determined whether the user's heart rate is too fast, normal, or too slow; or the body state information is generated in any other suitable manner, which is not limited in the embodiment of the present application.
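The interval-based generation of body state information might look like the following sketch. The interval boundaries (37.3 °C, 35 °C, 100 bpm, 60 bpm) are illustrative assumptions, since the disclosure only states that the label depends on which interval the reading falls in.

```python
def body_state(temperature_c, heart_rate_bpm):
    """Map raw sensor readings to body state labels.

    Sketch of the interval rule above; thresholds are assumed
    for illustration, not taken from the disclosure.
    """
    if temperature_c > 37.3:
        temp_state = "too high"
    elif temperature_c < 35.0:
        temp_state = "too low"
    else:
        temp_state = "normal"

    if heart_rate_bpm > 100:
        hr_state = "too fast"
    elif heart_rate_bpm < 60:
        hr_state = "too slow"
    else:
        hr_state = "normal"

    return {"temperature": temp_state, "heart_rate": hr_state}
```

The resulting labels could then be played on the earphone as prompt tones or shown on a display, as described earlier.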
In this embodiment of the present application, optionally, when the target data includes acceleration data, the processing result includes a motion state of the user. The motion state of the user is used to represent the motion performed by the user, for example, the head motion performed by the user is nodding, shaking, lowering, or raising, or the user is walking or running, or any other suitable motion state, which is not limited in this embodiment of the application.
In the embodiment of the application, the acceleration sensor built in the earphone is used, and the algorithm for identifying the motion state of the user is embedded in the earphone, so that the earphone has the capability of identifying the motion state of the user. The implementation manner of identifying the motion state of the user may include multiple manners, for example, after the acceleration sensor collects the electric signal, the electric signal is matched with the electric signals of multiple motion states, and if the electric signal is matched with the electric signal of the target motion state, the current motion state of the user is identified as the target motion state; or after the acceleration sensor collects the electric signal, the electric signal is converted into acceleration information in the form of the magnitude and the direction of the acceleration, and the current motion state of the user is determined according to the acceleration information.
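The second approach above, converting the sensor readings into acceleration magnitudes and classifying the motion state from them, can be sketched as follows. The variance thresholds and the particular state names are illustrative assumptions, not specified by the disclosure.

```python
import math


def classify_motion(samples):
    """Classify a window of (x, y, z) accelerometer samples in g units.

    Derives acceleration magnitudes from the raw readings, then picks a
    motion state from how much the magnitudes fluctuate over the window.
    """
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(mags) / len(mags)
    variance = sum((m - mean) ** 2 for m in mags) / len(mags)
    # Low fluctuation -> still; moderate -> walking; high -> running.
    if variance < 0.01:
        return "still"
    if variance < 0.5:
        return "walking"
    return "running"
```

The matching-based approach described first would instead compare the raw electrical signal against stored signal templates for each motion state.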
In summary, according to the embodiments of the present application, the earphone collects target data and sends it to the earphone storage device, so that the earphone storage device processes the target data to obtain a processing result and then sends the processing result to the earphone, and/or stores, displays, or plays the processing result; or the earphone sends the target data to a cloud server through a built-in mobile communication chip, so that the cloud server processes the target data to obtain a processing result and then sends the processing result to the earphone and/or stores it. In this way, the earphone storage device or the cloud server, which has stronger computing capability, processes the target data, and the earphone can utilize the computing, storing, displaying, and/or playing capabilities of the earphone storage device or the cloud server, which solves the problem that functions with higher computing power requirements cannot be realized by the earphone alone.
Referring to fig. 2, a flowchart illustrating steps of another embodiment of the data processing method of the present application is shown, and is applied to a headset, where the method specifically includes the following steps:
step 201, collecting target data.
Step 202, sending the target data to an earphone storage device for the earphone storage device to process the target data to obtain a processing result, and sending the processing result to the earphone, and/or storing the processing result, and/or displaying the processing result, and/or playing the processing result.
Step 203, receiving the processing result sent by the earphone accommodating device.
In the embodiment of the application, the earphone storage device sends the processing result to the earphone, and the earphone receives the sent processing result, so that the earphone sends the processing result to the mobile terminal, or plays the processing result, or stores the processing result in a storage medium on the earphone.
In this embodiment of the application, optionally, after receiving the processing result sent by the earphone accommodating apparatus, the method may further include: and the earphone sends the processing result to the mobile terminal.
For example, when a user uses the earphone to make a call with another person, the earphone storage device sends the voice-beautified or otherwise changed audio data to the earphone; the earphone sends it to the connected mobile phone, and the mobile phone sends it to the other party, so that the other party hears the beautified or otherwise changed voice.
In this embodiment of the application, optionally, after receiving the processing result sent by the earphone accommodating apparatus, the method may further include: and the earphone plays the processing result.
For example, the earphone collects the user's temperature data and heart rate data, and the earphone storage device determines from these data that the user's body temperature is too high and heart rate is too fast, obtaining the corresponding body state information. The earphone storage device sends this body state information to the earphone, and the earphone plays it, thereby prompting the user.
In this embodiment of the application, optionally, after receiving the processing result sent by the earphone accommodating apparatus, the method may further include: the processing result is stored on a storage medium on the headset.
When a storage medium is arranged on the earphone, the earphone can store the processing result on that storage medium after receiving it, so that the earphone can conveniently and quickly obtain the processing result later when it is needed, or obtain it when the earphone cannot establish a communication connection with the earphone storage device.
In summary, according to the embodiment of the present application, the earphone collects target data and sends it to the earphone storage device, so that the earphone storage device processes the target data to obtain a processing result and then sends the processing result to the earphone, and/or stores, displays, or plays the processing result; the earphone receives the processing result sent by the earphone storage device. In this way, the earphone storage device, which has stronger computing capability, processes the target data, and the earphone can utilize the computing, storing, displaying, and/or playing capabilities of the earphone storage device, which solves the problem that functions with higher computing power requirements cannot be realized by the earphone alone.
Referring to fig. 3, a flowchart illustrating steps of another embodiment of a data processing method according to the present application is shown, and is applied to an earphone storage device, where the method specifically includes the following steps:
step 301, target data is acquired.
In this embodiment of the present application, the obtaining of the target data may be receiving the target data from an earphone, or may be collecting audio data (that is, the target data) by using a microphone array disposed on the earphone receiving device, or any other suitable obtaining manner, which is not limited in this embodiment of the present application.
In this embodiment of the present application, optionally, an implementation manner of obtaining the target data includes: target data is received from the headset. For example, after the target data is collected by the headset, the target data is sent to the headset storage device through the bluetooth.
Step 302, processing the target data to obtain a processing result.
In this embodiment, the earphone receiving device implements various processing capabilities on target data by combining hardware and software, for example, performing voice processing on audio data to beautify or otherwise change the voice, or converting the audio data into text data, or translating the audio data to obtain target audio data or target text data in a target language, or generating body state information according to temperature data and/or heart rate data, or generating the motion state of the user according to acceleration data, or any other suitable processing, which is not limited in this embodiment of the present application.
In this embodiment of the present application, optionally, in an implementation manner of processing the target data to obtain a processing result, the processing method may include: sending the target data to a cloud server; and receiving a processing result obtained by processing the target data by the cloud server.
When the target data is processed, it can be processed by the cloud server in addition to the earphone storage device: after the earphone storage device acquires the target data, it sends the target data to the cloud server, the cloud server processes the target data, and the cloud server then sends the processing result to the earphone storage device. For example, the cloud server translates the audio data to obtain target audio data or target text data in the target language, which may specifically include any suitable processing. The cloud server can provide stronger computing power, so that more complex processing can be performed on the target data and more complex functions can be realized, while the power consumption of the earphone storage device is reduced, the processing speed is increased, and so on.
In the embodiment of the present application, optionally, the method may further include: detecting whether the target data meets preset requirements or user settings; if the target data meets the preset requirements or user settings, executing the step of sending the target data to a cloud server; and if the target data do not meet the preset requirements or the user setting, processing the target data by the earphone accommodating device.
When both the earphone storage device and the cloud server can process the target data, in one implementation the earphone storage device can detect whether the target data meets a preset requirement or a user setting. The preset requirement includes whether the data size of the target data exceeds a set threshold, whether the type of the target data is a preset type, or any other suitable requirement, which is not limited in the embodiment of the present application. The user setting includes whether the data size of the target data exceeds a threshold set by the user, whether the type of the target data is a type set by the user, or any other suitable setting, which is not limited in the embodiment of the present application. In this way, whether the target data is processed by the cloud server or by the earphone storage device can be flexibly controlled, so that the computing resources of both can be better utilized.
In the embodiment of the present application, optionally, one implementation of detecting whether the target data meets the preset requirement may include: if the data volume of the target data exceeds a set threshold, determining that the target data meets the preset requirement; and if the data volume of the target data does not exceed the set threshold, determining that the target data does not meet the preset requirement.
When both the earphone storage device and the cloud server can process the target data, in one implementation, if the data volume of the target data exceeds the set threshold, i.e. the target data meets the preset requirement, the target data is sent to the cloud server and processed by the cloud server; if the data volume of the target data does not exceed the set threshold, i.e. the target data does not meet the preset requirement, the target data is processed directly by the earphone storage device. The storage space and computing power of the earphone storage device are limited, so when the data volume of the target data is large, sending it to the cloud server for processing reduces the power consumption of the earphone storage device and increases the processing speed.
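The threshold-based routing rule above can be sketched in a few lines; the function name and the default threshold value are illustrative assumptions.

```python
def route_target_data(data: bytes, threshold: int = 1024) -> str:
    """Decide where to process target data.

    Data larger than the set threshold is sent to the cloud server;
    smaller data is handled locally by the earphone storage device.
    The 1024-byte default is an assumed value for illustration.
    """
    if len(data) > threshold:
        return "cloud_server"
    return "storage_device"
```

A user-configured threshold would simply be passed in place of the default, matching the "user setting" variant described above.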
In this embodiment of the application, optionally, the earphone accommodating apparatus is connected to the internet through a mobile communication network or a wireless local area network. The earphone accommodating device has networking capability and can connect to the Internet through a mobile communication network or a wireless local area network. For example, the headset storing device includes a 4G-LTE (fourth-generation mobile communication technology, Long Term Evolution) module, through which it can connect to a 4G network and transmit the target data to the cloud server.
In this embodiment of the present application, optionally, the processing result includes audio data after noise reduction processing; in an implementation manner of processing the target data to obtain a processing result, the processing method may include: and carrying out noise reduction processing on the audio data to obtain the audio data after the noise reduction processing.
Specifically, spectrum analysis is performed on the digital signal sampled in real time, so that the intensity and spectral distribution of the background noise can be determined, and a filter can then be designed from this model. When a person speaks, signal analysis is performed simultaneously to obtain the speaker's spectrum. Based on the background-noise spectrum and the speaker's spectrum, and tracking the real-time contrast between the two signals, the filter passes the speaker's sound spectrum while suppressing the spectrum of the background noise, reducing the background-noise energy, for example by 15 to 20 decibels, so that the noise-suppression effect is clearly perceptible.
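As a rough illustration of this scheme, spectral subtraction estimates the noise spectrum from a noise-only segment and subtracts it from the noisy-speech spectrum. This is a minimal sketch under that assumption, not the patent's filter design; the spectral floor value is illustrative.

```python
import numpy as np

def spectral_subtract(signal, noise_estimate, floor=0.05):
    """Suppress background noise by subtracting its magnitude spectrum.

    `noise_estimate` is a noise-only segment used to model the background
    noise; the result is floored at a fraction of the original magnitude
    to avoid negative spectral values (an illustrative choice).
    """
    spec = np.fft.rfft(signal)
    noise_mag = np.abs(np.fft.rfft(noise_estimate, n=len(signal)))
    # Subtract the noise magnitude, keep the noisy phase.
    mag = np.maximum(np.abs(spec) - noise_mag, floor * np.abs(spec))
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(signal))
```

In practice the noise model would be updated continuously rather than taken from a single segment.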
In this embodiment of the present application, optionally, the processing result includes audio data after echo cancellation processing; in an implementation manner of processing the target data to obtain a processing result, the processing method may include: and carrying out echo cancellation processing on the audio data to obtain the audio data after the echo cancellation processing.
The basic principle of echo cancellation is based on the correlation between the loudspeaker signal and the multipath echoes it generates: a speech model of the far-end signal is established and used to estimate the echo, and the filter coefficients are continuously adjusted so that the estimate approaches the real echo more closely. The echo estimate is then subtracted from the microphone's input signal to cancel the echo. In addition, the microphone input is compared with past loudspeaker output values to cancel delayed, multiply reflected acoustic echoes; depending on the past loudspeaker output values stored in memory, echoes with various delays can be cancelled.
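The adaptive-filter principle above can be sketched with a normalized LMS (NLMS) canceller. This is a textbook illustration, not the patent's algorithm; the tap count and step size are illustrative assumptions.

```python
import numpy as np

def nlms_echo_cancel(far_end, mic, taps=32, mu=0.5, eps=1e-8):
    """NLMS echo canceller sketch: estimate the echo from the far-end
    (loudspeaker) signal with an adaptive FIR filter, subtract the
    estimate from the microphone input, and update the coefficients
    so the estimate approaches the real echo."""
    w = np.zeros(taps)            # adaptive filter coefficients
    buf = np.zeros(taps)          # recent loudspeaker samples
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = far_end[n]
        echo_est = w @ buf                       # estimated echo
        e = mic[n] - echo_est                    # residual after cancelling
        w += mu * e * buf / (buf @ buf + eps)    # coefficient update
        out[n] = e
    return out
```

The memory of past loudspeaker samples (`buf`) is what allows echoes with various delays, up to the filter length, to be cancelled.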
In this embodiment of the present application, optionally, the target data includes audio data, and an implementation of processing the target data to obtain a processing result may include: performing voice processing on the audio data to obtain voice-processed audio data, and/or performing sound effect processing on the audio data to obtain the audio data after the sound effect processing. For a specific implementation, reference may be made to the description in the foregoing embodiments, which is not repeated here.
In this embodiment of the present application, optionally, the target data includes audio data, and the processing result includes a sound recording file. For a specific implementation manner of generating the audio record file, reference may be made to the description in the foregoing embodiment, which is not described herein again.
In this embodiment of the present application, optionally, in an implementation manner of processing the target data to obtain a processing result, the processing method may include: and marking target audio in the audio data. For a specific implementation, reference may be made to the description in the foregoing embodiments, which is not repeated herein.
In this embodiment of the present application, optionally, the target data includes audio data, and the processing result includes text data; an implementation of processing the target data to obtain the processing result may include: converting the audio data into the text data. For a specific implementation, reference may be made to the description in the foregoing embodiments, which is not repeated here.
In this embodiment of the present application, optionally, in an implementation manner of processing the target data to obtain a processing result, the method may further include: identifying preset type target information in the text data; and generating memo information or reminding information according to the target information.
The preset types include time, place, person, event, and the like, or any other suitable types, which are not limited in this embodiment of the present application. Target information of a preset type is identified in the text data, and memo information or reminder information is generated according to the target information. For example, if the text data is "a concert will be held at the Bird's Nest on November 7, 2020", the time-type target information is identified as "November 7, 2020", the place-type target information as "Bird's Nest", and the event-type target information as "concert"; a piece of reminder information is then generated from this target information to remind the user at that time.
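The extraction step above can be sketched as simple pattern matching over the text. This is a toy illustration only: the regular expression, the place and event vocabularies, and the reminder wording are all assumptions, not the patent's recognition method.

```python
import re

def extract_target_info(text: str) -> dict:
    """Identify preset-type target information (time, place, event)
    in text data. The patterns and word lists are illustrative."""
    info = {}
    time_match = re.search(r"\d{4}-\d{2}-\d{2}", text)  # assumed date format
    if time_match:
        info["time"] = time_match.group()
    for place in ("Bird's Nest", "stadium"):            # toy place lexicon
        if place in text:
            info["place"] = place
    for event in ("concert", "meeting"):                # toy event lexicon
        if event in text:
            info["event"] = event
    return info

def make_reminder(info: dict) -> str:
    """Generate reminder information from the extracted target info."""
    return (f"Reminder: {info.get('event', 'event')} at "
            f"{info.get('place', '?')} on {info.get('time', '?')}")
```

A production system would use a trained named-entity recognizer rather than fixed word lists.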
In this embodiment of the present application, optionally, the target data includes text data, and an implementation manner of processing the target data to obtain a processing result may include: the speech synthesis data is generated according to the text data, and the specific implementation manner may refer to the description in the foregoing embodiments, which is not described herein again.
In this embodiment of the present application, optionally, the target data includes audio data, and an implementation manner of processing the target data to obtain a processing result may include: according to the audio data, target audio data or target text data of the target language is obtained through translation, and specific implementation manners may refer to descriptions in the foregoing embodiments, which are not described herein again.
In this embodiment of the application, optionally, the target data includes temperature data and/or heart rate data, the processing result includes body state information, and processing the target data to obtain an implementation manner of the processing result may include: the body state information is generated according to the temperature data and/or the heart rate data, and specific implementation manners may refer to descriptions in the foregoing embodiments, which are not described herein again.
In this embodiment of the present application, optionally, the target data includes acceleration data, and an implementation of processing the target data to obtain a processing result may include: identifying the motion state of the user according to the acceleration data. For a specific implementation, reference may be made to the description in the foregoing embodiment, which is not repeated here.
In the embodiment of the present application, optionally, the method may further include: determining a target process associated with the motion state, and executing the target process. Each motion state is associated with a target process, and upon identifying the motion state, the earphone determines the target process associated with it. The target process includes answering or rejecting a call, marking target audio in a call, turning the volume up or down, turning a voice processing function on or off, giving the user a voice prompt, or any other suitable processing, which is not limited in this embodiment of the present application.
The call includes, but is not limited to, a telephone call, or an audio or video call in instant messaging software. The target audio in the call includes audio from the user's own side or from the other party. The voice processing includes recognizing and understanding the audio and giving corresponding feedback, such as converting the audio into corresponding text or commands, recognizing voice information in the audio, and responding according to the understanding (for example, a voice assistant function), or any other suitable voice processing, which is not limited in this embodiment of the present application. The voice prompts include, but are not limited to, reminders to the user against sitting too long or keeping the head down for a long time.
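The association between a recognized motion state and a target process can be sketched as a lookup table. The state names and process names below are hypothetical examples, not states defined in the patent.

```python
# Hypothetical mapping from a recognized motion state to a target
# process; both sides of the table are illustrative assumptions.
TARGET_PROCESSES = {
    "double_nod": "answer_call",
    "head_shake": "reject_call",
    "long_head_down": "posture_reminder",
}

def execute_target_process(motion_state: str) -> str:
    """Determine the target process associated with the motion state
    and return it; unrecognized states trigger no action."""
    return TARGET_PROCESSES.get(motion_state, "no_op")
```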
In this embodiment of the present application, optionally, the target data includes acceleration data and position data, and an implementation of processing the target data to obtain a processing result may include: generating navigation prompt information according to the acceleration data and the position data. The earphone storage device may also acquire position data, for example current position data, using a built-in GPS (Global Positioning System) chip. Navigation prompt information may then be generated from the acceleration data and the position data, including a navigation path, a prompt for the next action the user should take at the current position along the route, and the like, or any other suitable navigation prompt information, which is not limited in this embodiment of the present application. For example, when a user walks to a destination, the earphone and the earphone storage device receive the user's request, generate a navigation path from the user's current position to the destination, and then give the user voice navigation prompts.
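One ingredient of such prompts is the distance from the current GPS fix to the next waypoint. The sketch below computes it with the haversine formula and picks a prompt string; the 20 m trigger distance and the prompt wording are assumptions for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes (degrees)."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def navigation_prompt(position, waypoint, turn="turn left"):
    """Generate a voice navigation prompt from the current position and
    the next waypoint; phrasing and trigger distance are illustrative."""
    d = haversine_m(*position, *waypoint)
    if d < 20:
        return f"In {d:.0f} m, {turn}."
    return f"Continue straight for {d:.0f} m."
```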
In this embodiment of the present application, optionally, the target data includes audio data, and performing target processing on the target data to obtain a processing result includes: based on the voice processing function, the voice processing is performed on the audio data to obtain a voice processing result, and specific implementation manners may refer to the descriptions in the foregoing embodiments, which are not described herein again.
Step 303, sending the processing result to an earphone, and/or storing the processing result, and/or displaying the processing result, and/or playing the processing result, and/or sending the processing result to a cloud server.
In this embodiment of the present application, after the earphone storage device obtains the processing result, it may perform one or more of the following operations: sending the processing result to the earphone, storing the processing result, displaying the processing result, playing the processing result, or sending the processing result to the cloud server.
In summary, according to this embodiment of the present application, the earphone storage device obtains the target data, processes it to obtain a processing result, and sends the processing result to the earphone, and/or stores, displays, or plays the processing result, and/or sends it to the cloud server. The earphone storage device, with its stronger computing capability, can thus process the target data, and the earphone can utilize the computing, storing, displaying, and/or playing capabilities of the earphone storage device. This solves the problem that functions with higher computing-power requirements cannot be realized by the earphone alone.
Referring to fig. 4, a flowchart illustrating steps of another embodiment of a data processing method according to the present application is shown, and the method is applied to an earphone storage device, and may specifically include the following steps:
step 401, collecting the audio data through a microphone array disposed on the earphone receiving device.
In the embodiment of the present application, the earphone storage device has local data processing capability, and if a microphone array is also disposed on it, the earphone storage device can serve as an independent sound pickup apparatus. In some application scenarios, the earphone storage device can collect audio data through the microphone array. Compared with collecting audio data through the microphones on the earphone, a microphone array consists of a group of microphones arranged in a certain geometry (commonly linear or circular) and performs space-time processing on sound signals collected from different spatial directions, thereby realizing functions such as noise suppression, dereverberation, suppression of interfering voices, sound source direction finding, sound source tracking, and array gain, which improves the processing quality of the speech signal and the speech recognition rate in real environments.
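The simplest form of the space-time processing mentioned above is delay-and-sum beamforming: each channel is delayed to align signals arriving from a chosen direction, then the channels are averaged so that sound from that direction adds coherently. A minimal sketch, with per-channel delays assumed to be already known in integer samples:

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Minimal delay-and-sum beamformer over a microphone array.

    `channels` is a list of equal-length sample arrays; `delays` gives
    each channel's arrival delay in samples for the steering direction.
    Each channel is advanced by its delay, then the channels are averaged.
    """
    out = np.zeros(len(channels[0]))
    for ch, d in zip(channels, delays):
        out += np.roll(ch, -d)   # align this channel with the reference
    return out / len(channels)
```

Note that `np.roll` wraps around at the edges; a real implementation would use fractional-delay filters and handle block boundaries.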
And 402, processing the audio data to obtain a processing result.
In the embodiment of the present application, a specific processing manner of the audio data may refer to the description in the foregoing embodiment, and is not described herein again.
In this embodiment of the present application, optionally, the processing result includes prompt information, and an implementation manner of processing the target data to obtain the processing result may include: detecting the source direction and/or type of the environmental sound according to the audio data; and generating prompt information according to the source direction and/or the type of the environmental sound.
The multiple audio collection devices of the microphone array may collect multiple channels of audio data, and the source direction of a sound may be determined by a steerable-beamforming method based on maximum output power, by a high-resolution spectral estimation method, by a time-difference-of-arrival method, or by a combination of these methods, which is not limited in this application. The audio data may also be matched against several known sound types to determine the type of the ambient sound, for example that the collected ambient sound is the roar of an automobile.
Then, prompt information is generated according to the source direction and/or type of the ambient sound; the prompt information can indicate the source direction and/or type of the ambient sound. For example, if the ambient sound is determined to be the roar of an automobile ahead, the prompt "Attention! Vehicle ahead!" is generated, sent to the earphone as the processing result, and played to the user by the earphone to warn of the danger.
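The time-difference-of-arrival idea mentioned above can be sketched in a few lines: cross-correlate two microphone channels to find the relative delay, then convert that delay to an angle of incidence under a far-field assumption. The sampling rate, microphone spacing, and speed of sound below are illustrative values.

```python
import numpy as np

def tdoa_samples(ref, other):
    """Estimate the arrival-time difference (in samples) of `other`
    relative to `ref` via the cross-correlation peak."""
    corr = np.correlate(other, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

def direction_deg(tdoa, fs=16000, mic_distance=0.1, c=343.0):
    """Convert a sample delay between two mics into an angle of
    incidence (degrees), assuming a far-field plane wave."""
    s = np.clip(tdoa * c / (fs * mic_distance), -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```

Practical systems typically use GCC-PHAT weighting for robustness to reverberation, but the geometry is the same.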
Step 403, sending the processing result to an earphone, and/or storing the processing result, and/or displaying the processing result, and/or playing the processing result, and/or sending the processing result to a cloud server.
In the embodiment of the present application, a specific implementation manner of this step may refer to the description in the foregoing embodiment, and is not described herein again.
In summary, according to this embodiment of the present application, the earphone storage device collects audio data through a microphone array disposed on it, processes the audio data to obtain a processing result, and sends the processing result to the earphone, and/or stores, displays, or plays the processing result, and/or sends it to the cloud server. The earphone storage device, equipped with a microphone array, can collect surrounding sound better than the earphone, and with its stronger computing capability it can process the target data, so that the earphone can utilize the computing, storing, displaying, and/or playing capabilities of the earphone storage device. This solves the problem that functions with higher computing-power requirements cannot be realized by relying on the earphone alone.
It should be noted that, for simplicity of description, the method embodiments are described as a series of action combinations, but those skilled in the art should understand that the embodiments of the present application are not limited by the described order of actions, because some steps may be performed in other orders or simultaneously according to the embodiments of the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the actions involved are not necessarily required by the embodiments of the present application.
Referring to fig. 5, a block diagram of an embodiment of a data processing apparatus according to the present application is shown, and applied to a headset, the data processing apparatus specifically includes:
a data acquisition module 501, configured to acquire target data;
a data sending module 502, configured to send the target data to an earphone storage device, so that the earphone storage device processes the target data to obtain a processing result, and send the processing result to the earphone, and/or store the processing result, and/or display the processing result, and/or play the processing result; or sending the target data to a cloud server through a mobile communication chip arranged in the earphone, so that the cloud server processes the target data to obtain a processing result, and sending the processing result to the earphone and/or storing the processing result.
In this embodiment of the present application, optionally, the apparatus further includes:
and the result receiving module is used for receiving the processing result sent by the earphone accommodating device after the target data is sent to the earphone accommodating device.
In this embodiment of the present application, optionally, the apparatus further includes:
and the result sending module is used for sending the processing result to the mobile terminal after receiving the processing result sent by the earphone accommodating device.
In this embodiment of the present application, optionally, the apparatus further includes:
and the result playing module is used for playing the processing result after receiving the processing result sent by the earphone accommodating device.
In this embodiment of the present application, optionally, the apparatus further includes:
a result storage module, configured to store the processing result on a storage medium on the headset after receiving the processing result sent by the headset storing device.
In this embodiment of the application, optionally, the target data or the processing result is transmitted between the headset and the headset receiving device through bluetooth.
In this embodiment of the present application, optionally, the data acquisition module includes at least one of the following:
when the target data comprises audio data, an audio acquisition sub-module is used for acquiring the audio data through a microphone arranged on the earphone;
when the target data comprises acceleration data, an acceleration acquisition submodule is used for acquiring the acceleration data through an acceleration sensor arranged on the earphone;
when the target data comprises temperature data, the temperature acquisition submodule is used for acquiring the temperature data through a temperature sensor arranged on the earphone;
when the target data comprise heart rate data, the heart rate acquisition submodule is used for acquiring the heart rate data through a heart rate sensor arranged on the earphone.
In the embodiment of the present application, optionally, the processing result includes at least one of the following: a speech processing result obtained based on a speech processing function, a mark of target audio in the audio data, text data converted from the audio data, memo information, reminder information, speech synthesis data generated according to text data, and target audio data or target text data of a target language obtained by translating the audio data.
In this embodiment of the application, optionally, when the target data includes temperature data and/or heart rate data, the processing result includes body state information.
In this embodiment of the application, optionally, when the target data includes acceleration data, the processing result includes a motion state of the user.
In summary, according to this embodiment of the present application, the earphone collects the target data and sends it to the earphone storage device, so that the earphone storage device processes the target data to obtain a processing result and sends the processing result to the earphone, and/or stores, displays, or plays the processing result; or the earphone sends the target data to the cloud server through a built-in mobile communication chip, so that the cloud server processes the target data to obtain a processing result, and sends the processing result to the earphone and/or stores it. In this way, the earphone storage device or the cloud server, with its stronger computing capability, can process the target data, and the earphone can utilize the computing, storing, displaying, and/or playing capabilities of the earphone storage device or the cloud server. This solves the problem that functions with higher computing-power requirements cannot be realized by relying on the earphone alone.
Referring to fig. 6, a block diagram of an embodiment of a data processing apparatus according to the present application is shown, and is applied to an earphone accommodating apparatus, and specifically, the data processing apparatus may include:
a data obtaining module 601, configured to obtain target data;
a data processing module 602, configured to process the target data to obtain a processing result;
the result processing module 603 is configured to send the processing result to an earphone, and/or store the processing result, and/or display the processing result, and/or play the processing result, and/or send the processing result to a cloud server.
In this embodiment of the application, optionally, the data obtaining module includes:
and the data receiving submodule is used for receiving the target data from the earphone.
In this embodiment of the application, optionally, the target data includes audio data, and the data obtaining module includes:
and the data acquisition submodule is used for acquiring the audio data through a microphone array arranged on the earphone receiving device.
In this embodiment of the application, optionally, the data processing module includes:
the environment sound detection submodule is used for detecting the source direction and/or the type of the environment sound according to the audio data;
and the prompt generation submodule is used for generating prompt information according to the source direction and/or the type of the environment sound.
In this embodiment of the application, optionally, the data processing module includes:
and the noise reduction sub-module is used for performing noise reduction processing on the audio data to obtain the audio data after the noise reduction processing.
In this embodiment of the application, optionally, the data processing module includes:
and the echo cancellation submodule is used for carrying out echo cancellation processing on the audio data to obtain the audio data after the echo cancellation processing.
In this embodiment of the application, optionally, the target data includes audio data, and the data processing module includes:
the voice effect submodule is used for carrying out voice processing on the audio data to obtain voice processed audio data; and/or carrying out sound effect processing on the audio data to obtain the audio data after the sound effect processing.
In this embodiment of the present application, optionally, the target data includes audio data, and the processing result includes a sound recording file.
In this embodiment of the application, optionally, the data processing module includes:
and the marking sub-module is used for marking the target audio in the audio data.
In this embodiment of the application, optionally, the target data includes audio data, and the data processing module includes:
and the text conversion submodule is used for converting the audio data into the text data.
In this embodiment of the application, optionally, the data processing module further includes:
the information identification submodule is used for identifying preset type target information in the text data;
and the memo reminding generation submodule is used for generating memo information or reminding information according to the target information.
In this embodiment of the application, optionally, the target data includes text data, and the data processing module includes:
and the voice generation submodule is used for generating voice synthesis data according to the text data.
In this embodiment of the application, optionally, the target data includes audio data, and the data processing module includes:
and the translation submodule is used for translating the audio data to obtain target audio data or target text data of the target language.
In this embodiment of the application, optionally, the target data includes temperature data and/or heart rate data, and the data processing module includes:
and the body state generating submodule is used for generating body state information according to the temperature data and/or the heart rate data.
In this embodiment of the application, optionally, the target data includes acceleration data, and the data processing module includes:
and the motion state identification submodule is used for identifying the motion state of the user according to the acceleration data.
In this embodiment of the present application, optionally, the apparatus further includes:
a process determination module for determining a target process associated with the motion state;
and the processing execution module is used for executing the target processing.
In this embodiment of the application, optionally, the target data includes acceleration data and position data, and the data processing module includes:
and the navigation generation submodule is used for generating navigation prompt information according to the acceleration data and the position data.
In this embodiment of the application, optionally, the target data includes audio data, and the data processing module includes:
and the voice processing submodule is used for carrying out voice processing on the audio data based on the voice processing function to obtain a voice processing result.
In this embodiment of the application, optionally, the data processing module includes:
the data sending submodule is used for sending the target data to a cloud server;
and the result receiving submodule is used for receiving a processing result obtained by processing the target data by the cloud server.
In this embodiment of the present application, optionally, the apparatus further includes:
the requirement detection module is used for detecting whether the target data meets preset requirements or user settings;
the cloud execution module is used for executing the step of sending the target data to a cloud server if the target data meets the preset requirements or user settings;
and the storage device execution module is used for processing the target data by the earphone storage device if the target data does not meet the preset requirements or user settings.
In this embodiment of the application, optionally, the requirement detection module is specifically configured to:
if the data volume of the target data exceeds a set threshold, determining that the target data meets a preset requirement;
and if the data volume of the target data does not exceed a set threshold, determining that the target data does not meet preset requirements.
In this embodiment of the application, optionally, the earphone receiving device is connected to the internet through a mobile communication network or a wireless local area network.
In summary, according to this embodiment of the present application, the earphone storage device obtains the target data, processes it to obtain a processing result, and sends the processing result to the earphone, and/or stores, displays, or plays the processing result, and/or sends it to the cloud server. The earphone storage device, with its stronger computing capability, can thus process the target data, and the earphone can utilize the computing, storing, displaying, and/or playing capabilities of the earphone storage device. This solves the problem that functions with higher computing-power requirements cannot be realized by the earphone alone.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
Fig. 7 is a block diagram illustrating an apparatus 700 for data processing in accordance with an example embodiment. For example, the apparatus 700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 7, apparatus 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls overall operation of the device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing element 702 may include one or more processors 720 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 702 may include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 can include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operation at the device 700. Examples of such data include instructions for any application or method operating on device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 706 provides power to the various components of the device 700. The power components 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 700.
The multimedia component 708 includes a screen that provides an output interface between the device 700 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 708 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 700 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 710 is configured to output and/or input audio signals. For example, audio component 710 includes a Microphone (MIC) configured to receive external audio signals when apparatus 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 704 or transmitted via the communication component 716. In some embodiments, audio component 710 also includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 714 includes one or more sensors for providing status assessments of various aspects of the apparatus 700. For example, the sensor assembly 714 may detect an open/closed state of the device 700 and the relative positioning of components, such as the display and keypad of the apparatus 700. The sensor assembly 714 may also detect a change in position of the apparatus 700 or a component of the apparatus 700, the presence or absence of user contact with the apparatus 700, the orientation or acceleration/deceleration of the apparatus 700, and a change in temperature of the apparatus 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the apparatus 700 and other devices. The apparatus 700 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 704 comprising instructions, executable by the processor 720 of the device 700 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 8 is a schematic diagram of a server in some embodiments of the invention. The server 1900, which may vary widely in configuration or performance, may include one or more Central Processing Units (CPUs) 1922 (e.g., one or more processors), memory 1932, and one or more storage media 1930 (e.g., one or more mass storage devices) storing applications 1942 or data 1944. The memory 1932 and the storage medium 1930 may be, among other things, transient or persistent storage. The program stored in the storage medium 1930 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Further, the central processing unit 1922 may be configured to communicate with the storage medium 1930 and to execute, on the server 1900, the series of instruction operations stored in the storage medium 1930.
The server 1900 may also include one or more power supplies 1926, one or more wired or wireless network interfaces 1950, one or more input-output interfaces 1958, one or more keyboards 1956, and/or one or more operating systems 1941, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
A non-transitory computer readable storage medium having instructions stored therein which, when executed by a processor of an apparatus (smart terminal or server), enable the apparatus to perform a data processing method applied to a headset, the method comprising:
collecting target data;
sending the target data to an earphone accommodating device for the earphone accommodating device to process the target data to obtain a processing result, sending the processing result to the earphone, and/or storing the processing result, and/or displaying the processing result, and/or playing the processing result; or sending the target data to a cloud server through a mobile communication chip arranged in the earphone, so that the cloud server processes the target data to obtain a processing result, and sending the processing result to the earphone and/or storing the processing result.
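As a rough illustration only (the class and method names below are hypothetical, not from the patent), the two-branch dispatch described above — sending target data either to the earphone accommodating device or, when the earphone carries a mobile communication chip, directly to a cloud server — can be sketched in Python as:

```python
class Headset:
    """Toy model of the headset side of the data processing method."""

    def __init__(self, has_mobile_chip, case_link, cloud_link):
        self.has_mobile_chip = has_mobile_chip
        self.case_link = case_link    # e.g. a Bluetooth channel to the accommodating device
        self.cloud_link = cloud_link  # e.g. a cellular channel to the cloud server

    def dispatch(self, target_data):
        """Send collected target data out for processing and return the result."""
        if self.has_mobile_chip:
            return self.cloud_link.process(target_data)
        return self.case_link.process(target_data)
```

Either link object is assumed to expose a `process` method that returns the processing result; the earphone then plays, stores, or forwards that result as described above.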
Optionally, after the sending the target data to a headset storage device, the method further includes:
and receiving the processing result sent by the earphone accommodating device.
Optionally, after the receiving the processing result sent by the headset storing device, the method further includes:
and sending the processing result to the mobile terminal.
Optionally, after the receiving the processing result sent by the headset storing device, the method further includes:
and playing the processing result.
Optionally, after the receiving the processing result sent by the headset storing device, the method further includes:
storing the processing result on a storage medium on the headset.
Optionally, the target data or the processing result is transmitted between the headset and the headset storage device through bluetooth.
Optionally, the acquiring target data comprises at least one of:
when the target data comprises audio data, acquiring the audio data through a microphone arranged on the earphone;
when the target data comprises acceleration data, acquiring the acceleration data through an acceleration sensor arranged on the earphone;
when the target data comprises temperature data, acquiring the temperature data through a temperature sensor arranged on the earphone;
when the target data comprise heart rate data, the heart rate data are collected through a heart rate sensor arranged on the earphone.
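A minimal sketch of the per-sensor collection branches above (sensor names and the callable interface are illustrative; real firmware would read the microphone, acceleration sensor, temperature sensor, and heart rate sensor directly):

```python
def collect_target_data(requested, sensors):
    """Collect each requested data type from its matching headset sensor.

    `requested` is a set of data-type names; `sensors` maps each name to a
    zero-argument read function standing in for the corresponding sensor
    arranged on the earphone.
    """
    collected = {}
    for kind in ("audio", "acceleration", "temperature", "heart_rate"):
        if kind in requested and kind in sensors:
            collected[kind] = sensors[kind]()
    return collected
```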
Optionally, the processing result includes at least one of: audio data after noise reduction processing, audio data after echo cancellation processing, a sound recording file, a mark of target audio in the audio data, text data converted from the audio data, memo information or reminding information generated according to the text data, target audio data or target text data of a target language obtained by translating the audio data, and a voice processing result obtained based on a voice processing function.
Optionally, when the target data includes temperature data and/or heart rate data, the processing result includes body state information.
Optionally, when the target data includes acceleration data, the processing result includes a motion state of the user.
A non-transitory computer readable storage medium having instructions stored therein which, when executed by a processor of an apparatus (smart terminal or server), enable the apparatus to perform a data processing method applied to an earphone accommodating device, the method comprising:
acquiring target data;
processing the target data to obtain a processing result;
sending the processing result to an earphone, and/or storing the processing result, and/or displaying the processing result, and/or playing the processing result, and/or sending the processing result to a cloud server.
Optionally, the acquiring target data includes:
receiving the target data from the headset.
Optionally, the target data includes audio data, and the acquiring the target data includes:
the audio data is collected through a microphone array arranged on the earphone receiving device.
Optionally, the processing the target data to obtain a processing result includes:
detecting the source direction and/or type of the environmental sound according to the audio data;
and generating prompt information according to the source direction and/or the type of the environmental sound.
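The patent does not say how the source direction is detected; one standard approach for a two-microphone array, shown purely as an illustration, is to estimate the inter-microphone time delay by cross-correlation and convert it to an angle of arrival (all names and the microphone geometry here are assumptions):

```python
import math

def estimate_direction(sig_left, sig_right, mic_distance_m, sample_rate_hz,
                       speed_of_sound=343.0):
    """Estimate the angle of arrival (degrees, 0 = straight ahead) from the
    time delay between two microphone signals, found by brute-force
    cross-correlation over integer sample lags."""
    n = len(sig_right)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-(n - 1), n):
        lo, hi = max(0, -lag), min(n, n - lag)
        score = sum(sig_left[i] * sig_right[i + lag] for i in range(lo, hi))
        if score > best_score:
            best_score, best_lag = score, lag
    # Positive lag: the sound reached the left microphone first.
    tau = best_lag / sample_rate_hz
    sin_theta = max(-1.0, min(1.0, speed_of_sound * tau / mic_distance_m))
    return math.degrees(math.asin(sin_theta))
```

The prompt information could then be generated from the estimated angle (and from a separately classified sound type), e.g. "vehicle approaching from the left".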
Optionally, the processing the target data to obtain a processing result includes:
and carrying out noise reduction processing on the audio data to obtain the audio data after the noise reduction processing.
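The patent leaves the noise-reduction algorithm unspecified; as a deliberately simple stand-in (a real accommodating device would more likely use spectral subtraction, beamforming, or a learned model), a moving-average low-pass filter shows the shape of such a step:

```python
def moving_average_denoise(samples, window=5):
    """Toy noise reduction: smooth the waveform with a centered moving average.

    Edge positions average over whatever neighbours exist, so the output has
    the same length as the input.
    """
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out
```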
Optionally, the processing the target data to obtain a processing result includes:
and carrying out echo cancellation processing on the audio data to obtain the audio data after the echo cancellation processing.
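Echo cancellation is likewise unspecified in the patent; a common technique, sketched here under the assumption that the far-end reference signal is available, is a normalized least-mean-squares (NLMS) adaptive filter that learns the echo path and subtracts its estimate:

```python
def nlms_echo_cancel(mic, reference, order=4, mu=0.5, eps=1e-8):
    """Subtract an adaptively estimated echo of `reference` from `mic`.

    Returns the echo-suppressed signal (the NLMS error sequence).
    """
    w = [0.0] * order                      # adaptive filter taps
    out = []
    for n in range(len(mic)):
        # Most recent `order` reference samples (zero-padded at the start).
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(order)]
        echo_estimate = sum(wk * xk for wk, xk in zip(w, x))
        e = mic[n] - echo_estimate         # residual after echo removal
        norm = sum(xk * xk for xk in x) + eps
        w = [wk + mu * e * xk / norm for wk, xk in zip(w, x)]
        out.append(e)
    return out
```

The filter length `order` and step size `mu` are illustrative values; production cancellers use far longer filters plus double-talk detection.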
Optionally, the target data includes audio data, and the processing the target data to obtain a processing result includes:
carrying out voice processing on the audio data to obtain voice-processed audio data; and/or carrying out sound effect processing on the audio data to obtain the audio data after the sound effect processing.
Optionally, the target data includes audio data, and the processing result includes a sound recording file.
Optionally, the processing the target data to obtain a processing result includes:
and marking target audio in the audio data.
Optionally, the target data includes audio data, and the processing the target data to obtain a processing result includes:
converting the audio data into text data.
Optionally, the processing the target data to obtain a processing result further includes:
identifying preset type target information in the text data;
and generating memo information or reminding information according to the target information.
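As one hypothetical example of a "preset type" of target information (the pattern, field names, and reminder format below are all assumptions, not from the patent), a time expression in the transcribed text can be recognised with a regular expression and turned into a reminder entry:

```python
import re

# Illustrative preset type: an HH:MM time expression in the transcribed text.
TIME_PATTERN = re.compile(r"\b(\d{1,2}):(\d{2})\b")

def extract_reminder(text):
    """Return a reminder dict if the text contains a HH:MM time, else None."""
    match = TIME_PATTERN.search(text)
    if match is None:
        return None
    return {"type": "reminder", "time": match.group(0), "content": text}
```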
Optionally, the target data includes text data, and the processing the target data to obtain a processing result includes:
and generating voice synthesis data according to the text data.
Optionally, the target data includes audio data, and the processing the target data to obtain a processing result includes:
and translating to obtain target audio data or target text data of the target language according to the audio data.
Optionally, the target data includes temperature data and/or heart rate data, and the processing the target data to obtain a processing result includes:
and generating body state information according to the temperature data and/or the heart rate data.
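A minimal sketch of mapping the readings to body state information; the threshold values and state labels are illustrative placeholders, not medical guidance from the patent:

```python
def body_state(temperature_c=None, heart_rate_bpm=None):
    """Map temperature and/or heart rate readings to coarse state labels."""
    state = {}
    if temperature_c is not None:
        state["temperature"] = "fever" if temperature_c >= 37.3 else "normal"
    if heart_rate_bpm is not None:
        if heart_rate_bpm < 60:
            state["heart_rate"] = "low"
        elif heart_rate_bpm <= 100:
            state["heart_rate"] = "normal"
        else:
            state["heart_rate"] = "high"
    return state
```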
Optionally, the target data includes acceleration data, and the processing the target data to obtain a processing result includes:
and identifying the motion state of the user according to the acceleration data.
Optionally, the method further comprises:
determining a target process associated with the motion state;
the target process is executed.
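The three steps above — identify the motion state from the acceleration data, determine the associated target process, execute it — can be sketched as follows. The variance thresholds, state names, and the target-process table are all hypothetical:

```python
def classify_motion(accel_magnitudes):
    """Classify coarse motion from the variance of accelerometer magnitudes."""
    mean = sum(accel_magnitudes) / len(accel_magnitudes)
    var = sum((a - mean) ** 2 for a in accel_magnitudes) / len(accel_magnitudes)
    if var < 0.05:
        return "still"
    if var < 1.0:
        return "walking"
    return "running"

# Hypothetical table associating a target process with each motion state.
TARGET_PROCESS = {
    "still": lambda: "pause step counter",
    "walking": lambda: "count steps",
    "running": lambda: "start workout tracking",
}

def handle_motion(accel_magnitudes):
    """Identify the motion state, then execute its associated target process."""
    state = classify_motion(accel_magnitudes)
    return state, TARGET_PROCESS[state]()
```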
Optionally, the target data includes acceleration data and position data, and the performing target processing on the target data to obtain a processing result includes:
and generating navigation prompt information according to the acceleration data and the position data.
Optionally, the target data includes audio data, and the performing target processing on the target data to obtain a processing result includes:
and performing voice processing on the audio data based on a voice processing function to obtain a voice processing result.
Optionally, the performing target processing on the target data to obtain a processing result includes:
sending the target data to a cloud server;
and receiving a processing result obtained by processing the target data by the cloud server.
Optionally, the method further comprises:
detecting whether the target data meets preset requirements or user settings;
if the target data meets the preset requirements or user settings, executing the step of sending the target data to a cloud server;
and if the target data do not meet the preset requirements or the user setting, processing the target data by the earphone accommodating device.
Optionally, the detecting whether the target data meets a preset requirement includes:
if the data volume of the target data exceeds a set threshold, determining that the target data meets a preset requirement;
and if the data volume of the target data does not exceed a set threshold, determining that the target data does not meet preset requirements.
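The dispatch decision above reduces to a single comparison; this sketch uses data volume in bytes and an arbitrary illustrative threshold (the patent does not fix a value):

```python
def choose_processor(target_data, threshold_bytes=64 * 1024):
    """Return 'cloud' when the data volume exceeds the set threshold,
    else 'local' (i.e. processed by the earphone accommodating device)."""
    return "cloud" if len(target_data) > threshold_bytes else "local"
```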
Optionally, the headset storage device is connected to the internet through a mobile communication network or a wireless local area network.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one of skill in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The foregoing has described in detail the data processing method, data processing apparatus, and machine-readable medium provided by the present application. Specific examples have been used herein to explain the principles and embodiments of the present application, and the descriptions of the above examples are only intended to help understand the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (37)

1. A data processing method is applied to earphones, and comprises the following steps:
collecting target data;
sending the target data to an earphone accommodating device for the earphone accommodating device to process the target data to obtain a processing result, sending the processing result to the earphone, and/or storing the processing result, and/or displaying the processing result, and/or playing the processing result; or sending the target data to a cloud server through a mobile communication chip arranged in the earphone, so that the cloud server processes the target data to obtain a processing result, and sending the processing result to the earphone and/or storing the processing result.
2. The method of claim 1, wherein after the sending the target data to a headset receiving device, the method further comprises:
and receiving the processing result sent by the earphone accommodating device.
3. The method according to claim 2, wherein after the receiving the processing result transmitted by the headset storing device, the method further comprises:
and sending the processing result to the mobile terminal.
4. The method according to claim 2, wherein after the receiving the processing result transmitted by the headset storing device, the method further comprises:
and playing the processing result.
5. The method according to claim 2, wherein after the receiving the processing result transmitted by the headset storing device, the method further comprises:
storing the processing result on a storage medium on the headset.
6. The method of claim 1, wherein the target data or processing results are transmitted between the headset and the headset receiving device via bluetooth.
7. The method of any one of claims 1-6, wherein the acquiring target data comprises at least one of:
when the target data comprises audio data, acquiring the audio data through a microphone arranged on the earphone;
when the target data comprises acceleration data, acquiring the acceleration data through an acceleration sensor arranged on the earphone;
when the target data comprises temperature data, acquiring the temperature data through a temperature sensor arranged on the earphone;
when the target data comprise heart rate data, the heart rate data are collected through a heart rate sensor arranged on the earphone.
8. The method of claim 7, wherein the processing result comprises at least one of: audio data after noise reduction processing, audio data after echo cancellation processing, a sound recording file, a mark of target audio in the audio data, text data converted from the audio data, memo information or reminding information generated according to the text data, target audio data or target text data of a target language obtained by translating the audio data, and a voice processing result obtained based on a voice processing function.
9. The method of claim 7, wherein the processing result comprises body state information when the target data comprises temperature data and/or heart rate data.
10. The method of claim 7, wherein the processing result comprises a motion state of the user when the target data comprises acceleration data.
11. A data processing method is applied to an earphone accommodating device and comprises the following steps:
acquiring target data;
processing the target data to obtain a processing result;
sending the processing result to an earphone, and/or storing the processing result, and/or displaying the processing result, and/or playing the processing result, and/or sending the processing result to a cloud server.
12. The method of claim 11, wherein the obtaining target data comprises:
receiving the target data from the headset.
13. The method of claim 11, wherein the target data comprises audio data, and wherein obtaining target data comprises:
the audio data is collected through a microphone array arranged on the earphone receiving device.
14. The method of claim 13, wherein the processing the target data to obtain a processing result comprises:
detecting the source direction and/or type of the environmental sound according to the audio data;
and generating prompt information according to the source direction and/or the type of the environmental sound.
15. The method of claim 11, wherein the processing the target data to obtain a processing result comprises:
and carrying out noise reduction processing on the audio data to obtain the audio data after the noise reduction processing.
16. The method of claim 11, wherein the processing the target data to obtain a processing result comprises:
and carrying out echo cancellation processing on the audio data to obtain the audio data after the echo cancellation processing.
17. The method of claim 11, wherein the target data comprises audio data, and wherein processing the target data to obtain a processing result comprises:
carrying out voice processing on the audio data to obtain voice-processed audio data; and/or carrying out sound effect processing on the audio data to obtain the audio data after the sound effect processing.
18. The method of claim 11, wherein the target data comprises audio data and the processing result comprises a sound recording file.
19. The method of claim 18, wherein the processing the target data to obtain a processing result comprises:
and marking target audio in the audio data.
20. The method of claim 11, wherein the target data comprises audio data, and wherein processing the target data to obtain a processing result comprises:
converting the audio data into text data.
21. The method of claim 20, wherein said processing said target data to obtain a processing result further comprises:
identifying preset type target information in the text data;
and generating memo information or reminding information according to the target information.
22. The method of claim 11, wherein the target data comprises text data, and wherein processing the target data to obtain a processing result comprises:
and generating voice synthesis data according to the text data.
23. The method of claim 11, wherein the target data comprises audio data, and wherein processing the target data to obtain a processing result comprises:
and translating to obtain target audio data or target text data of the target language according to the audio data.
24. The method of claim 11, wherein the target data comprises temperature data and/or heart rate data, and wherein processing the target data to obtain a processed result comprises:
and generating body state information according to the temperature data and/or the heart rate data.
25. The method of claim 11, wherein the target data comprises acceleration data, and wherein processing the target data to obtain a processing result comprises:
and identifying the motion state of the user according to the acceleration data.
26. The method of claim 25, further comprising:
determining a target process associated with the motion state;
the target process is executed.
27. The method of claim 11, wherein the target data comprises acceleration data and position data, and the target processing the target data to obtain a processing result comprises:
and generating navigation prompt information according to the acceleration data and the position data.
28. The method of claim 11, wherein the target data comprises audio data, and the target processing the target data to obtain a processing result comprises:
and performing voice processing on the audio data based on a voice processing function to obtain a voice processing result.
29. The method according to any one of claims 11-28, wherein said performing target processing on said target data to obtain a processing result comprises:
sending the target data to a cloud server;
and receiving a processing result obtained by processing the target data by the cloud server.
30. The method of claim 29, further comprising:
detecting whether the target data meets preset requirements or user settings;
if the target data meets the preset requirements or user settings, executing the step of sending the target data to a cloud server;
and if the target data do not meet the preset requirements or the user setting, processing the target data by the earphone accommodating device.
31. The method of claim 30, wherein the detecting whether the target data meets a preset requirement comprises:
if the data volume of the target data exceeds a set threshold, determining that the target data meets a preset requirement;
and if the data volume of the target data does not exceed a set threshold, determining that the target data does not meet preset requirements.
32. The method of claim 29, wherein the headset receiving device is connected to the internet via a mobile communication network or a wireless local area network.
33. A data processing apparatus, applied to a headset, comprising:
the data acquisition module is used for acquiring target data;
the data sending module is used for sending the target data to an earphone accommodating device so that the earphone accommodating device can process the target data to obtain a processing result, and sending the processing result to the earphone, and/or storing the processing result, and/or displaying the processing result, and/or playing the processing result; or sending the target data to a cloud server through a mobile communication chip arranged in the earphone, so that the cloud server processes the target data to obtain a processing result, and sending the processing result to the earphone and/or storing the processing result.
34. A data processing device, which is applied to an earphone storage device, comprises:
the data acquisition module is used for acquiring target data;
the data processing module is used for processing the target data to obtain a processing result;
and the result processing module is used for sending the processing result to an earphone, and/or storing the processing result, and/or displaying the processing result, and/or playing the processing result, and/or sending the processing result to a cloud server.
35. An apparatus for data processing, comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory, and wherein execution of the one or more programs by one or more processors comprises instructions for:
collecting target data;
sending the target data to an earphone accommodating device for the earphone accommodating device to process the target data to obtain a processing result, sending the processing result to the earphone, and/or storing the processing result, and/or displaying the processing result, and/or playing the processing result; or sending the target data to a cloud server through a mobile communication chip arranged in the earphone, so that the cloud server processes the target data to obtain a processing result, and sending the processing result to the earphone and/or storing the processing result.
36. An apparatus for data processing, comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory, and wherein execution of the one or more programs by one or more processors comprises instructions for:
acquiring target data;
processing the target data to obtain a processing result;
sending the processing result to an earphone, and/or storing the processing result, and/or displaying the processing result, and/or playing the processing result, and/or sending the processing result to a cloud server.
37. A machine-readable medium having stored thereon instructions, which when executed by one or more processors, cause an apparatus to perform a data processing method as claimed in one or more of claims 1 to 32.
CN202010508183.3A 2020-06-05 2020-06-05 Data processing method and device and readable medium Pending CN111741394A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010508183.3A CN111741394A (en) 2020-06-05 2020-06-05 Data processing method and device and readable medium
PCT/CN2021/074911 WO2021244056A1 (en) 2020-06-05 2021-02-02 Data processing method and apparatus, and readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010508183.3A CN111741394A (en) 2020-06-05 2020-06-05 Data processing method and device and readable medium

Publications (1)

Publication Number Publication Date
CN111741394A true CN111741394A (en) 2020-10-02

Family

ID=72648414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010508183.3A Pending CN111741394A (en) 2020-06-05 2020-06-05 Data processing method and device and readable medium

Country Status (2)

Country Link
CN (1) CN111741394A (en)
WO (1) WO2021244056A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112331179A (en) * 2020-11-11 2021-02-05 北京搜狗科技发展有限公司 Data processing method and earphone accommodating device
CN112506331A (en) * 2020-12-11 2021-03-16 北京搜狗科技发展有限公司 Data processing method and earphone accommodating device
CN113345440A (en) * 2021-06-08 2021-09-03 北京有竹居网络技术有限公司 Signal processing method, device and equipment and Augmented Reality (AR) system
WO2021244056A1 (en) * 2020-06-05 2021-12-09 北京搜狗智能科技有限公司 Data processing method and apparatus, and readable medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107333201A (en) * 2017-07-24 2017-11-07 歌尔科技有限公司 One kind translation earphone storage box, wireless translation earphone and wireless translation system
CN107948789A (en) * 2017-11-30 2018-04-20 会听声学科技(北京)有限公司 Active noise reduction headset designs system and method based on cloud service
CN108509428A (en) * 2018-02-26 2018-09-07 深圳市百泰实业股份有限公司 Earphone interpretation method and system
CN108550367A (en) * 2018-05-18 2018-09-18 深圳傲智天下信息科技有限公司 A kind of portable intelligent interactive voice control device, method and system
CN109067965A (en) * 2018-06-15 2018-12-21 Oppo广东移动通信有限公司 Interpretation method, translating equipment, wearable device and storage medium
CN109509469A (en) * 2018-11-29 2019-03-22 与德科技有限公司 Voice control body temperature detection method, device, system and storage medium
CN109543198A (en) * 2018-11-29 2019-03-29 与德科技有限公司 Interpretation method, device, system and storage medium
CN109567779A (en) * 2018-11-29 2019-04-05 与德科技有限公司 Heart rate detection method, system and storage medium
CN208940167U (en) * 2018-09-04 2019-06-04 杭州骇音科技有限公司 Bluetooth headset with voice wake-up function
CN110147557A (en) * 2019-05-23 2019-08-20 歌尔科技有限公司 Translation method and system, charging box for a wireless headset, and storage medium
EP3621067A1 (en) * 2018-05-18 2020-03-11 Shenzhen Aukey Smart Information Technology Co., Ltd. Ai voice interaction method, apparatus and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109246517B (en) * 2018-10-12 2021-03-12 歌尔科技有限公司 Noise reduction microphone correction method of wireless earphone, wireless earphone and charging box
CN109938711A (en) * 2019-04-23 2019-06-28 深圳傲智天下信息科技有限公司 Health monitor method, system and computer readable storage medium
CN111031440A (en) * 2019-12-27 2020-04-17 深圳春沐源控股有限公司 Earphone assembly
CN111741394A (en) * 2020-06-05 2020-10-02 北京搜狗科技发展有限公司 Data processing method and device and readable medium

Also Published As

Publication number Publication date
WO2021244056A1 (en) 2021-12-09

Similar Documents

Publication Publication Date Title
CN111741394A (en) Data processing method and device and readable medium
CN108762494B (en) Method, device and storage medium for displaying information
CN107749925B (en) Audio playing method and device
CN113873379B (en) Mode control method and device and terminal equipment
EP4002878A1 (en) Method and apparatus for playing audio data, electronic device, and storage medium
CN109360549B (en) Data processing method, wearable device and device for data processing
CN104991754A (en) Recording method and apparatus
US20180054688A1 (en) Personal Audio Lifestyle Analytics and Behavior Modification Feedback
CN115482830B (en) Voice enhancement method and related equipment
CN111696553A (en) Voice processing method and device and readable medium
CN110431549A (en) Information processing unit, information processing method and program
WO2022253003A1 (en) Speech enhancement method and related device
CN109256145B (en) Terminal-based audio processing method and device, terminal and readable storage medium
WO2022267468A1 (en) Sound processing method and apparatus thereof
CN114898736A (en) Voice signal recognition method and device, electronic equipment and storage medium
CN111724783B (en) Method and device for waking up intelligent device, intelligent device and medium
CN113506582A (en) Sound signal identification method, device and system
CN105244037B (en) Audio signal processing method and device
CN110660403B (en) Audio data processing method, device, equipment and readable storage medium
CN110580910B (en) Audio processing method, device, equipment and readable storage medium
CN112866480B (en) Information processing method, information processing device, electronic equipment and storage medium
CN111696566B (en) Voice processing method, device and medium
CN114863916A (en) Speech recognition model training method, speech recognition device and storage medium
CN113488066A (en) Audio signal processing method, audio signal processing apparatus, and storage medium
CN109712629B (en) Audio file synthesis method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210706

Address after: Room 802, 8th Floor, Building 9, Yard 1, Zhongguancun East Road, Haidian District, Beijing 100084

Applicant after: Beijing Sogou Intelligent Technology Co.,Ltd.

Address before: Room 01, 9th Floor, Cyber Building, Building 9, Yard 1, Zhongguancun East Road, Haidian District, Beijing 100084

Applicant before: BEIJING SOGOU TECHNOLOGY DEVELOPMENT Co.,Ltd.

RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 2020-10-02