WO2023001054A1 - Priority configuration method and apparatus for audio playback, and device and storage medium - Google Patents


Info

Publication number
WO2023001054A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
priority
audio stream
configuration information
target
Prior art date
Application number
PCT/CN2022/105703
Other languages
English (en)
Chinese (zh)
Inventor
许超杰
Original Assignee
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2023001054A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/4401Bootstrapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt

Definitions

  • The embodiments of the present application relate to the technical field of media playback, and in particular to a priority configuration method and apparatus for audio playback, and a device and storage medium.
  • A pair of headphones is capable of establishing wireless connections with multiple devices such as mobile phones, tablets, and laptops.
  • The earphone can decide which device's audio stream to play according to pre-stored priority configuration information.
  • Since the headset does not have a display screen, the user cannot directly set the above priority configuration information on the headset.
  • Instead, the user sets the priority configuration information on a device (such as a mobile phone) connected to the headset; the device then sends the priority configuration information to the headset, and the headset stores it.
  • Embodiments of the present application provide a priority configuration method and apparatus for audio playback, and a device and storage medium. The technical solutions are as follows:
  • A method for configuring priorities for audio playback is provided; the method is performed by an audio playback device and includes:
  • acquiring user behavior data generated in a target scene mode, where the user behavior data includes data for controlling the audio playback device to switch between and play different audio streams; and
  • based on the user behavior data, updating the priority configuration information corresponding to the target scene mode, where the priority configuration information is used to configure priority relationships among multiple audio streams.
  • a priority configuration device for audio playback comprising:
  • a mode determination module configured to determine the target scene mode where the audio playback device is located
  • a data acquisition module configured to acquire user behavior data generated in the target scene mode, where the user behavior data includes data for controlling the audio playback device to switch and play different audio streams;
  • a configuration updating module configured to update priority configuration information corresponding to the target scene mode based on the user behavior data, where the priority configuration information is used to configure priority relationships among various audio streams.
  • An audio playback device is provided, including a processor and a memory; a computer program is stored in the memory and is executed by the processor to implement the above-mentioned priority configuration method for audio playback.
  • A computer-readable storage medium is provided; a computer program is stored in the storage medium and is executed by a processor to implement the above-mentioned priority configuration method for audio playback.
  • a computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the audio playback device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the audio playback device executes the above-mentioned priority configuration method for audio playback.
  • Fig. 1 is a schematic diagram of a scheme implementation environment provided by an embodiment of the present application.
  • Fig. 2 is a schematic diagram of time slice division in LE Audio (Bluetooth Low Energy Audio) provided by an embodiment of the present application;
  • Fig. 3 is a schematic diagram of the state transitions of the ASE (Audio Stream Endpoint) state machine provided by an embodiment of the present application;
  • Fig. 4 is a flowchart of a priority configuration method for audio playback provided by an embodiment of the present application
  • FIG. 5 is a flowchart of an audio playback control method provided by an embodiment of the present application.
  • FIG. 6 is a flow chart of the connection establishment phase provided by an embodiment of the present application.
  • Fig. 7 is a flow chart of the initialization stage of the ASE state machine provided by one embodiment of the present application.
  • FIG. 8 is a flow chart of the audio stream playing and switching stages provided by an embodiment of the present application.
  • Fig. 9 is a block diagram of a priority configuration device for audio playback provided by an embodiment of the present application.
  • FIG. 10 is a block diagram of a device for configuring priorities for audio playback provided by another embodiment of the present application.
  • Fig. 11 is a structural block diagram of an audio playback device provided by an embodiment of the present application.
  • FIG. 1 shows a schematic diagram of a solution implementation environment provided by an embodiment of the present application.
  • the solution implementation environment may include an audio playback device 10 and multiple connection devices 20 .
  • the audio playing device 10 refers to an electronic device capable of playing audio.
  • the audio playback device 10 may be electronic devices such as earphones and speakers.
  • the audio playback device 10 may be a TWS (True Wireless Stereo) earphone.
  • the connection device 20 refers to an electronic device that communicates with the audio playback device 10 .
  • the connection device 20 may be an electronic device such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, a multimedia playback device, or a wearable device.
  • the audio playback device 10 can establish communication connections with multiple connection devices 20 .
  • the audio playback device 10 is an earphone, and the earphone establishes a communication connection with multiple connection devices 20 such as a mobile phone, a tablet computer, and a notebook computer.
  • the connection device 20 may send an audio stream to the audio playback device 10 through the communication connection, and then the audio playback device 10 decodes and plays the audio stream.
  • the communication connection between the audio playback device 10 and the connection device 20 is a wireless communication connection.
  • the communication connection between the audio playback device 10 and the connection device 20 is a Bluetooth connection.
  • the audio playback device 10 and the connection device 20 communicate through Bluetooth LE Audio technology (hereinafter referred to as LE Audio).
  • LE Audio is a Bluetooth audio technology standard.
  • CIG: Connected Isochronous Group;
  • CIS: Connected Isochronous Stream.
  • In LE Audio, a transmission channel and transmission strategy for time-dependent data are defined. That is, for the single-sender, multiple-receiver scenario, the protocol determines how the sender and receivers allocate sending and receiving time slices, so as to ensure that all receivers can meet the transmission requirements under certain synchronization requirements, as shown in FIG. 2.
  • each Event can be regarded as a CIG
  • each Subevent can be regarded as a CIS
  • an Event can include multiple Subevents.
  • the Bluetooth master device will send data to the Bluetooth slave device, that is, M to S in Figure 2; the Bluetooth slave device will also reply to the Bluetooth master device, that is, S to M in Figure 2. In this way, a Bluetooth master device can communicate with several Bluetooth slave devices at the same time.
  • Figure 3 is an ASE state machine for managing and controlling audio streams.
  • In the ASE state machine, a given operation can only be delivered in certain states.
  • the states shown in Figure 3 are described as follows:
  • Connection release state: the CIS connection has been destroyed.
  • Note that a CIS is bidirectional, whereas an ASE is unidirectional: one ASE state machine is only used to manage and control data flowing from one party to the other.
  • The above method of setting priority configuration information not only requires the assistance of devices other than the earphone, but also needs manual setting by the user, which is complicated and cumbersome.
  • the audio playback device (such as earphones) can realize the setting of priority configuration information without the assistance of other devices.
  • Specifically, the audio playback device collects and records user behavior data, where the user behavior data includes data for controlling the audio playback device to switch between and play different audio streams; it then automatically updates the priority configuration information based on the user behavior data. This realizes automatic setting and updating of the priority configuration information, without the assistance of other devices and without manual setting by the user, which greatly reduces complexity.
  • In addition, corresponding priority configuration information is set for each scene mode, so that when the audio playback device is in different scene modes, audio streams can be played and controlled with the priority configuration information suitable for the current scene mode, making audio playback control more accurate and better meeting user needs.
  • FIG. 4 shows a flowchart of a method for configuring priorities for audio playback provided by an embodiment of the present application.
  • The method can be applied in the solution implementation environment shown in FIG. 1, with the audio playback device 10 as the execution subject of each step. The method can include the following steps (410-430):
  • Step 410: determine the target scene mode in which the audio playback device is located.
  • The audio playback device can be in different scene modes, which can be preset and stored in the audio playback device by technicians (such as staff of the manufacturer of the audio playback device), or can be set by the user and stored in the device.
  • Different scenario modes correspond to different usage scenarios and also correspond to different priority requirements.
  • As shown in Table 1, scene configuration information is stored in the audio playback device, and the scene configuration information includes the correspondence between the identifier of each scene mode and its name. Reserved scene modes can be added by users.
  • the audio playback device determines the target scene mode it is currently in, and the target scene mode may be any one of the scene modes configured in the scene configuration information.
  • the audio playback device uses one of the following methods to determine the target scene mode it is in:
  • Method 1: collect user voice data, and identify the user voice data to determine the target scene mode.
  • the user can control the audio playback device to enter a certain target scene mode by voice. For example, when the user says "enter home mode", the audio playback device collects the user's voice data generated by the user's speech, and then recognizes the user's voice data through voice recognition, semantic analysis and other technologies to determine the current target scene mode.
  • Method 2: collect environmental audio data, and identify the environmental audio data to determine the target scene mode.
  • Environmental audio data refers to audio data generated in the environment where the audio playback device is located.
  • At least one piece of feature information corresponding to each scene mode is preset, and the feature information corresponding to the environmental audio data is obtained by identifying the environmental audio data.
  • If the feature information corresponding to the environmental audio data matches the feature information of a certain scene mode, that scene mode is determined to be the target scene mode.
  • For example, the feature information corresponding to the home scene mode includes, but is not limited to, at least one of the following: the volume is greater than a first threshold, there are multiple speakers, the speakers' voices occur irregularly, and so on.
  • the specific content of the feature information corresponding to each scene mode is not limited, which can be reasonably set according to requirements.
  • Method 3: in response to a user operation on the audio playback device, determine the target scene mode based on the user operation. The user can control the audio playback device to enter a certain target scene mode by touching or tapping it, for example, double-tap for home mode, triple-tap for meeting mode, and so on.
  • The target scene mode may also be determined according to an instruction from a control device. The control device may be a device such as a mobile phone, tablet computer, or PC (Personal Computer) connected to the audio playback device, or another device such as a router, which is not limited in this application.
  • The methods for determining the target scene mode described above are only exemplary and explanatory; this application does not limit them, and the target scene mode may also be determined in other ways.
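  • The determination methods above can be sketched as follows. This is an illustrative sketch only; the mode names, voice commands, and tap counts are hypothetical examples, not values defined by this application:

```python
# Illustrative sketch only: dispatching the scene-mode determination
# methods described above. Mode names, voice commands, and tap counts
# are hypothetical examples, not values defined by this application.

VOICE_COMMANDS = {"enter home mode": "home", "enter meeting mode": "meeting"}
TAP_GESTURES = {2: "home", 3: "meeting"}  # e.g. double-tap, triple-tap

def scene_from_voice(transcript):
    """Method 1: match recognized user voice data to a scene mode."""
    return VOICE_COMMANDS.get(transcript.strip().lower())

def scene_from_taps(tap_count):
    """Method 3: map a touch/tap gesture on the device to a scene mode."""
    return TAP_GESTURES.get(tap_count)
```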
  • Step 420: acquire user behavior data generated in the target scene mode, where the user behavior data includes data for controlling the audio playback device to switch between and play different audio streams.
  • the user behavior data includes data for controlling a certain audio stream to preempt or not preempt another audio stream for playback.
  • the user behavior data includes data for controlling the audio stream belonging to the target audio stream type to preempt or not preempt other audio streams for playback.
  • the types of audio streams that can be played by the audio playback device may include two types: media and telephony.
  • In other embodiments, audio stream types can also be extended to other types, or audio streams can be divided at different classification granularities. For example, media can be further subdivided into various types such as audio/video calls, music playback, and video playback, which are not limited in this application.
  • For example, if the audio playback device is playing media and a phone call comes in, and the user chooses to answer it, user behavior data indicating that the phone preempts the media for playback is recorded; if the user chooses to reject the call, user behavior data indicating that the phone does not preempt the media for playback is recorded.
  • the user behavior data includes data for controlling the audio stream from the target connected device to preempt or not preempt other audio streams for playback.
  • the target connection device refers to one connection device among the plurality of connection devices connected to the audio playback device.
  • For example, the audio playback device is connected to device 1 and device 2 and is playing the audio stream from device 1. If the user controls the audio stream from device 2 to preempt the audio stream from device 1 for playback, user behavior data indicating that the audio stream from device 2 preempts the audio stream from device 1 for playback is recorded.
  • the user behavior data includes data for controlling the audio stream that is from the target connected device and belongs to the target audio stream type to preempt or not preempt other audio streams for playback.
  • For example, the audio playback device is playing a phone call from device 1 when a phone call comes in on device 2. If the user chooses to answer the call from device 2, user behavior data indicating that device 2's phone preempts device 1's phone for playback is recorded; if the user chooses to reject it, user behavior data indicating that device 2's phone does not preempt device 1's phone is recorded.
  • For another example, the audio playback device is playing media from device 1 when a phone call comes in on device 1. If the user chooses to answer it, user behavior data indicating that device 1's phone preempts device 1's media for playback is recorded; if the user chooses to reject it, user behavior data indicating that device 1's phone does not preempt device 1's media is recorded.
  • For another example, the audio playback device is playing media from device 1 when a phone call comes in on device 2. If the user chooses to answer it, user behavior data indicating that device 2's phone preempts device 1's media for playback is recorded; if the user chooses to reject it, user behavior data indicating that device 2's phone does not preempt device 1's media is recorded.
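  • The recording of user behavior data in the examples above can be sketched as follows; the event structure and field names are hypothetical assumptions, not defined by this application:

```python
# Illustrative sketch (hypothetical structure): each user decision to
# answer or reject an incoming stream is recorded as a preempt /
# no-preempt event keyed by connected device and audio stream type.

from dataclasses import dataclass

@dataclass(frozen=True)
class BehaviorEvent:
    scene_mode: str       # e.g. "home", "meeting"
    incoming_device: str  # device the new audio stream comes from
    incoming_type: str    # "phone" or "media"
    current_device: str   # device whose stream is currently playing
    current_type: str
    preempted: bool       # True if the user let the new stream take over

log = []

def record_decision(scene, incoming, current, answered):
    """Record whether the incoming (device, type) stream preempted playback."""
    log.append(BehaviorEvent(scene, incoming[0], incoming[1],
                             current[0], current[1], answered))
```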
  • Step 430: based on the user behavior data, update the priority configuration information corresponding to the target scene mode, where the priority configuration information is used to configure the priority relationship among multiple audio streams.
  • After acquiring the user behavior data generated in the target scene mode, the audio playback device updates the priority configuration information corresponding to the target scene mode based on the user behavior data.
  • the priority configuration information is used to configure priority relationships among multiple audio stream types.
  • the priority configuration information includes: an identifier of the audio stream type, and a priority value corresponding to the identifier of the audio stream type.
  • Audio stream type | Identifier | Priority value
    Telephone | 0x01 | 0
    Media | 0x02 | 0
    Reserved | 0x00 | 0
  • In the initially set priority configuration information, the priority values corresponding to the identifiers of the audio stream types may all be the same, for example all 0; or they may be different, for example set according to default priority values.
  • If the user behavior data indicates that the target audio stream type preempts another for playback, the priority value corresponding to the target audio stream type is updated to increase its priority. Taking the convention that a larger priority value means higher priority as an example, the update may add 1 to the priority value corresponding to the target audio stream type. For example, if the user behavior data includes controlling the phone to preempt the media for playback, 1 is added to the phone's priority value; if the user behavior data includes controlling the phone not to preempt the media, the priority values of both the phone and the media remain unchanged.
  • priority relationships between different audio stream types can be automatically set based on user behavior data.
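  • A minimal sketch of this update rule, assuming the convention that a larger priority value means higher priority and using the +1 increment from the example above (function names are hypothetical):

```python
# Minimal sketch of the update rule: initial priority values are all 0
# (as in the table above), and a recorded preemption adds 1 to the
# preempting audio stream type. Function names are hypothetical.

type_priority = {"phone": 0, "media": 0}  # initial values all equal

def update_type_priority(priorities, preempting_type, preempted):
    """Add 1 to the preempting type's value only if preemption occurred."""
    if preempted:
        priorities[preempting_type] += 1
    return priorities
```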
  • the priority configuration information is used to configure priority relationships among audio streams of multiple connected devices.
  • the priority configuration information includes: an identifier of the connected device, and a priority value corresponding to the identifier of the connected device.
  • In the initially set priority configuration information, the priority values corresponding to the identifiers of the connected devices may all be the same, for example all 0; or they may be different, for example set according to default priority values.
  • If the user behavior data indicates that the audio stream from the target connected device preempts another for playback, the priority value corresponding to the target connected device is updated to raise the priority of the target connected device. Taking the convention that a larger priority value means higher priority as an example, the update may add 1 to the priority value corresponding to the target connected device.
  • For example, if the user behavior data includes controlling the audio stream from device 2 to preempt the audio stream from device 1 for playback, 1 is added to device 2's priority value; if the user behavior data includes controlling the audio stream from device 2 not to preempt the audio stream from device 1, the priority value of each connected device remains unchanged.
  • priority relationships between audio streams of different connected devices can be automatically set based on user behavior data.
  • In some embodiments, the priority configuration information is used to configure the priority relationship between multiple audio stream types, and also to configure the priority relationship between the same or different audio stream types of different devices connected to the audio playback device.
  • the priority configuration information includes: an identifier of the connected device, an identifier of the audio stream type, and a priority value corresponding to the identifier of the connected device and the identifier of the audio stream type.
  • Alternatively, the priority configuration information includes: a combination identifier and a priority value corresponding to the combination identifier, where the combination identifier is used to distinguish combinations of connected device and audio stream type.
  • Combination | Combination identifier | Priority value
    Device 1 media | 0x01 | 0
    Device 1 phone | 0x02 | 0
    Device 2 media | 0x03 | 0
    Device 2 phone | 0x04 | 0
    Reserved | 0x00 | 0
  • If the user behavior data indicates that the target audio stream type of the target connected device preempts another for playback, the corresponding priority value is updated to increase the priority of that device's audio stream type. Taking the convention that a larger priority value means higher priority as an example, the update may add 1 to the priority value corresponding to the target audio stream type of the target connected device.
  • For example, if the user answers device 2's call while playing device 1's phone, the priority value corresponding to device 2's phone is increased by 1, and other priority values remain unchanged.
  • If the user answers device 1's call while playing device 1's media, the priority value corresponding to device 1's phone is increased by 1, and other priority values remain unchanged.
  • If the user answers device 2's call while playing device 1's media, the priority value corresponding to device 2's phone is increased by 1, and other priority values remain unchanged.
  • In this way, the priority relationship between different audio stream types of the same connected device can be automatically set, and the priority relationship between the same or different audio stream types of different connected devices can also be automatically set.
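  • The combination-keyed variant can be sketched in the same way, with one priority value stored per (connected device, audio stream type) pair; the device names, keys, and initial values here are hypothetical:

```python
# Sketch of the combination-keyed configuration: one priority value per
# (connected device, audio stream type) pair, mirroring the combination
# identifiers above. Device names and keys are hypothetical.

combo_priority = {
    ("device1", "media"): 0, ("device1", "phone"): 0,
    ("device2", "media"): 0, ("device2", "phone"): 0,
}

def update_combo_priority(priorities, device, stream_type, preempted):
    """Raise the priority of the (device, type) combination that preempted."""
    if preempted:
        priorities[(device, stream_type)] += 1
    return priorities
```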
  • the audio playback device (such as a headset) can realize the setting of priority configuration information without the assistance of other devices.
  • Specifically, the audio playback device collects and records user behavior data, where the user behavior data includes data for controlling the audio playback device to switch between and play different audio streams; it then automatically updates the priority configuration information based on the user behavior data. This realizes automatic setting and updating of the priority configuration information, without the assistance of other devices and without manual setting by the user, which greatly reduces complexity.
  • In addition, corresponding priority configuration information is set for each scene mode, so that when the audio playback device is in different scene modes, audio streams can be played and controlled with the priority configuration information suitable for the current scene mode, making audio playback control more accurate and better meeting user needs.
  • the audio playback device implements audio playback control by performing the following steps:
  • Step 510: during playback of a first audio stream in the target scene mode, if a playback instruction for a second audio stream is received, determine the priority relationship between the first audio stream and the second audio stream according to the priority configuration information corresponding to the target scene mode.
  • The audio playback device can determine a first audio stream type and a second audio stream type, where the first audio stream type is the audio stream type to which the first audio stream belongs and the second audio stream type is the audio stream type to which the second audio stream belongs; the priority relationship between the first audio stream and the second audio stream is then determined according to the priority relationship between the first and second audio stream types defined in the priority configuration information corresponding to the target scene mode.
  • For example, the first audio stream is the audio stream of a video service between the audio playback device (such as a headset) and a first connection device (such as a notebook computer), and the second audio stream is the audio stream of a telephone service between the audio playback device and a second connection device (such as a mobile phone). The first audio stream type is media and the second audio stream type is telephone. If, according to the priority configuration information, the telephone's priority is determined to be higher than the media's, then the priority of the second audio stream is higher than that of the first audio stream.
  • In some embodiments, the priority configuration information corresponding to the target scene mode is used to configure the priority relationship among the audio streams of multiple connected devices in the target scene mode.
  • the priority configuration information is shown in Table 3 above.
  • The audio playback device can determine a first connection device and a second connection device, where the first connection device is the connection device from which the first audio stream comes and the second connection device is the connection device from which the second audio stream comes; the priority relationship between the first audio stream and the second audio stream is then determined according to the priority relationship between the first and second connection devices defined in the priority configuration information corresponding to the target scene mode.
  • For example, the first audio stream is the audio stream of a video service between the audio playback device (such as a headset) and a first connection device (such as a notebook computer), and the second audio stream is the audio stream of a telephone service between the audio playback device and a second connection device (such as a mobile phone). If the priority of the second connection device is determined to be higher than that of the first connection device, the priority of the second audio stream is higher than that of the first audio stream.
  • The audio playback device can determine a first combination and a second combination, where the first combination is the combination of the audio stream type to which the first audio stream belongs and its connection device, and the second combination is the combination of the audio stream type to which the second audio stream belongs and its connection device; the priority relationship between the first audio stream and the second audio stream is then determined according to the priority relationship between the first combination and the second combination defined in the priority configuration information corresponding to the target scene mode.
  • For example, the first audio stream is the audio stream of a video service between the audio playback device (such as a headset) and a first connection device (such as a notebook computer), and the second audio stream is the audio stream of a telephone service between the audio playback device and a second connection device (such as a mobile phone). The first combination is media + first connection device, and the second combination is telephone + second connection device. If the priority of the second combination is determined to be higher than that of the first combination, then the priority of the second audio stream is higher than that of the first audio stream.
  • Step 520: if the priority of the second audio stream is higher than that of the first audio stream, pause playing the first audio stream and play the second audio stream.
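  • Steps 510-520 can be sketched as a priority comparison followed by a preempt-or-keep decision; the combination-style (device, type) lookup keys and concrete values used here are hypothetical:

```python
# Sketch of steps 510-520: look up the two streams' priority values for
# the current scene mode and decide whether the second stream preempts
# the first. The (device, type) keys follow the combination-based
# configuration described earlier; concrete values are hypothetical.

def should_preempt(priorities, first, second):
    """Step 510: True if the second stream's priority exceeds the first's."""
    return priorities.get(second, 0) > priorities.get(first, 0)

def on_play_request(priorities, current, incoming):
    """Step 520: pause the current stream only if the incoming one wins."""
    if should_preempt(priorities, current, incoming):
        return incoming   # pause current stream, play incoming stream
    return current        # equal or lower priority: keep current stream
```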
  • Pausing playback of the first audio stream includes: switching the first state machine from the transmitting state to the stopped-transmission state, where the first state machine is the state machine corresponding to the first audio stream; and disconnecting the audio stream transmission connection between the audio playback device and the first connection device, where the first connection device is the device that transmits the first audio stream to the audio playback device.
  • the first state machine is the ASE state machine
  • the transmitting state refers to the state in which audio stream data has started to be transmitted, such as the Streaming state in the ASE state machine; the stopped-transmission state refers to the state in which audio stream transmission has stopped, such as the Disabled state in the ASE state machine.
  • the parameter configuration completion state refers to the state in which the two parties of the Bluetooth connection have configured CIG/CIS related parameters, such as the QoS Configured state in the ASE state machine.
  • playing the second audio stream includes: switching the second state machine from the parameter-configuration-completed state to the connection establishment state, where the second state machine is the state machine corresponding to the second audio stream; establishing an audio stream transmission connection between the audio playback device and the second connected device, where the second connected device is the device that transmits the second audio stream to the audio playback device; switching the second state machine from the connection establishment state to the transmitting state; receiving the second audio stream from the second connected device through the audio stream transmission connection; and playing the received second audio stream.
  • the second state machine can also be an ASE state machine
  • the parameter configuration completion state refers to the state in which the two parties of the Bluetooth connection have configured CIG/CIS related parameters, such as the QoS Configured state in the ASE state machine
  • the connection establishment state refers to the state in which the CIS connection and data path used to transmit the audio stream have been established, such as the Enabling state in the ASE state machine
  • the transmitting state refers to the state in which the audio stream data has begun to be transmitted, such as the Streaming state in the ASE state machine.
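The ASE states named above can be sketched as a small transition table. This is an illustrative simplification limited to the transitions this document mentions, not a complete rendering of the LE Audio ASE state machine:

```python
# Simplified ASE state transitions, keyed by (current state, operation).
ASE_TRANSITIONS = {
    ("Idle", "Config Codec"): "Codec Configured",
    ("Codec Configured", "Config QoS"): "QoS Configured",
    ("QoS Configured", "Enable"): "Enabling",
    ("Enabling", "Receiver Start Ready"): "Streaming",
    ("Streaming", "Disable"): "Disabling",
    ("Disabling", "Receiver Stop Ready"): "QoS Configured",
}

class AseStateMachine:
    """One state machine per audio stream, as described above."""

    def __init__(self):
        self.state = "Idle"  # initial state after the BLE connection

    def handle(self, operation):
        key = (self.state, operation)
        if key not in ASE_TRANSITIONS:
            raise ValueError(f"{operation!r} is not valid in state {self.state!r}")
        self.state = ASE_TRANSITIONS[key]
        return self.state
```

Driving the machine with Enable and Receiver Start Ready moves it from QoS Configured through Enabling to Streaming, matching the playback path described above; Disable and Receiver Stop Ready return it to QoS Configured.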
  • the audio playback device may also play prompt information in audio form, prompting the user to select the first audio stream or the second audio stream for playback.
  • the user feeds back the audio stream to be played in voice form according to the prompt information, and the audio playback device determines the audio stream to be played through voice collection and semantic recognition.
  • the audio playback device can determine the priority relationship between audio streams based on the priority configuration information, so as to automatically control the playback and switching of audio streams, allowing audio to be played more intelligently in multi-connection scenarios.
  • the technical solution of the present application will be exemplarily introduced and described by taking the audio playback device as an earphone, the first connection device as a notebook computer, and the second connection device as a mobile phone as an example.
  • Step 601 the headset creates a default priority attribute table.
  • the headset creates a default priority attribute table (that is, the priority configuration information described above) according to the format of Table 2, assuming that the priority of the phone is the highest.
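A minimal sketch of such a default table follows. The concrete layout of Table 2 is not reproduced in this excerpt, so the stream types and values below are assumptions chosen only so that phone ranks highest, as stated above:

```python
# Hypothetical default priority attribute table: larger value = higher priority.
DEFAULT_PRIORITY_TABLE = {
    "phone": 3,  # highest by default, per the assumption above
    "media": 2,
    "alert": 1,  # hypothetical additional stream type
}

def highest_priority_type(stream_types, table=DEFAULT_PRIORITY_TABLE):
    """Pick the stream type with the largest priority value."""
    return max(stream_types, key=lambda t: table[t])
```

For example, given simultaneous requests for media and phone streams, the lookup selects phone under this default table.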
  • Step 602 a connection is established between the mobile phone and the headset through BLE (Bluetooth Low Energy).
  • step 603 the earphone creates three ASE state machines for the mobile phone, and the initial states of the three ASE state machines are all in the Idle state.
  • each audio stream corresponds to an ASE state machine, which is used to manage and control the audio stream.
  • For the mobile phone, there are requirements for both making calls and playing media, so three ASE state machines need to be created: one to receive the phone audio stream from the mobile phone, marked as ASE0_0; one to transmit the microphone audio stream to the mobile phone, marked as ASE0_1; and one for media playback, marked as ASE0_2.
  • Step 604 the mobile phone and the earphone exchange audio codec capabilities to configure the audio encoder and decoder.
  • step 605 a connection is established between the notebook computer and the earphone through BLE.
  • Step 606 the earphone creates an ASE state machine for the notebook computer, and the initial state of the ASE state machine is the Idle state.
  • Step 607 the notebook computer and the earphone exchange audio codec capabilities to configure the audio encoder and decoder.
  • For each ASE state machine created, the headset needs to initialize it. As shown in Figure 7, the initialization phase of the ASE state machine includes the following steps (701-706):
  • Step 701 after the BLE is connected, the initial state of the ASE state machine is the Idle state.
  • Step 702 the mobile phone determines the audio codec capabilities supported by both parties (the mobile phone and the headset).
  • Step 703 the mobile phone configures its own audio codec capability, and then sends an audio codec configuration command to the earphone.
  • Step 704 after the earphone receives the audio codec configuration instruction, it configures its own audio codec capability, switches the ASE state machine from the initial state to the codec-configuration-completed state (that is, from the Idle state to the Codec Configured state), and sends the CIS configuration parameters supported by the headset to the mobile phone.
  • Step 705 after the mobile phone receives the CIS configuration parameters sent by the headset, it determines the CIS configuration parameters supported by both parties, configures its own CIS, and sends a QoS configuration command to the headset; the QoS configuration command instructs the headset to configure its own CIS and includes the CIS parameters configured by the mobile phone.
  • Step 706 after the earphone receives the QoS configuration instruction, it switches the ASE state machine from the codec-configuration-completed state to the parameter-configuration-completed state (that is, from the Codec Configured state to the QoS Configured state), and saves the CIS configuration parameters sent by the mobile phone.
  • the earphone can automatically set and update the priority attribute table (that is, the priority configuration information mentioned above) based on the user behavior data in the manner described in the above embodiments.
  • Step 801 when the notebook computer connected to the earphone wants to play media (such as music or video), it sends an Enable command to the earphone; the parameters carried in the Enable command indicate that the audio stream type to be transmitted is media.
  • Step 802 after receiving the Enable command, the earphone switches the state machine ASE1_0 from the parameter-configuration-completed state (QoS Configured state) to the connection establishment state (Enabling state).
  • step 803 the laptop computer sends an instruction to create a CIS to the headset, and a CIS connection is established between the laptop computer and the headset.
  • Step 804 after the headset's preparation (configuration of CIS-related parameters) is completed, it can automatically send the Receiver Start Ready command to itself, switch the state machine ASE1_0 from the connection establishment state (Enabling state) to the transmitting state (Streaming state), and notify the notebook computer.
  • Step 805 after receiving the notification, the notebook computer starts to transmit the audio stream data of the media.
  • Step 806 while the earphone is playing media from the notebook computer, if the earphone receives an Enable command from the mobile phone, it looks up the priority attribute table; if the priority of the audio stream type in the Enable command (such as phone) is higher than that of the currently playing audio stream type, the following operations continue; otherwise, the Enable command is ignored.
  • Step 807 the headset automatically sends a Disable command to itself, switches the state machine ASE1_0 from the transmitting state (Streaming state) to the stop-transmitting state (Disabling state), and terminates the CIS connection between the headset and the notebook computer.
  • Step 808 the earphone automatically sends the Receiver Stop Ready command to itself, and switches the state machine ASE1_0 from the stop-transmitting state (Disabling state) to the parameter-configuration-completed state (QoS Configured state).
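The decision in step 806 (and the follow-up in steps 807-808) can be sketched as follows. The table values are illustrative assumptions, and the returned action labels are placeholders for the Disable/CIS-teardown/Enable sequence described above:

```python
PRIORITY_TABLE = {"phone": 3, "media": 2}  # assumed values; phone outranks media

def on_enable(incoming_type, current_type, table=PRIORITY_TABLE):
    """Step 806: compare the Enable command's stream type against the
    currently playing one and decide whether to preempt."""
    if table[incoming_type] > table[current_type]:
        # Steps 807-808: Disable the current stream, tear down its CIS
        # connection, return its state machine to QoS Configured, then
        # bring up the higher-priority stream.
        return "switch"
    return "ignore"  # lower or equal priority: the Enable command is ignored

print(on_enable("phone", "media"))  # → switch
print(on_enable("media", "phone"))  # → ignore
```

Under this table, an incoming phone call preempts the notebook's media playback, while a media Enable command arriving during a call is ignored.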
  • the Enable command is a standard command in LE Audio; its metadata (Metadata) carries parameters related to the context (Context), which this technical solution treats as the audio stream type.
  • different priority attribute tables may be set for different scene modes.
  • the audio playback device determines the priority attribute table corresponding to the target scene mode from the multiple stored priority attribute tables. For example, for conference scenarios and non-conference scenarios, corresponding priority attribute tables are set respectively.
  • in the priority attribute table corresponding to the conference scene, the priority of media is higher than the priority of phone; in the priority attribute table corresponding to the non-conference scene, the priority of phone is higher than the priority of media.
  • an appropriate priority attribute table can be used in different scenarios to determine the priority relationship between audio streams, so that the determination of the priority is adapted to the scenario and better meets the actual needs of the user.
  • in response to an instruction to enable the automatic switching function, the audio playback device enables the automatic switching function, which refers to the function of automatically switching which audio stream is played according to priority configuration information that meets the availability condition; the availability condition means that the priority configuration information is generated based on user behavior data within a specific period of time.
  • the user can independently choose to enable or disable the automatic switching function.
  • when the automatic switching function is enabled, the audio playback device can determine the priority relationship between different audio streams based on the priority configuration information in the manner described in the above embodiments; when the automatic switching function is turned off, the audio playback device can prompt the user to manually select different audio streams for switching and playing.
  • the priority configuration information is used only when the availability condition is met, so as to ensure its accuracy and avoid audio switching control errors as much as possible. For example, if a person forms a habit in 21 days, then after 21 days of user behavior data have been collected, priority configuration information meeting the availability condition can be generated based on those data, and the user is then prompted whether to enable the automatic switching function.
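The availability condition in this example reduces to a simple threshold check; the 21-day figure comes from the habit-formation example above, and the function name is ours:

```python
HABIT_FORMATION_DAYS = 21  # threshold taken from the example above

def config_available(days_of_behavior_data, threshold=HABIT_FORMATION_DAYS):
    """The priority configuration information is usable only once it was
    generated from a long enough window of user behavior data."""
    return days_of_behavior_data >= threshold
```

Only after this check passes would the device prompt the user about enabling automatic switching.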
  • FIG. 9 shows a block diagram of an apparatus for configuring priorities for audio playback provided by an embodiment of the present application.
  • the device has the function of realizing the above-mentioned method example, and the function may be realized by hardware, or may be realized by executing corresponding software by the hardware.
  • the apparatus may be the audio playback device described above, or may be set in the audio playback device.
  • the device 900 includes: a mode determination module 910 , a data acquisition module 920 and a configuration update module 930 .
  • the mode determination module 910 is configured to determine the target scene mode of the audio playback device.
  • the data acquisition module 920 is configured to acquire user behavior data generated in the target scene mode, where the user behavior data includes data for controlling the audio playback device to switch and play different audio streams.
  • the configuration updating module 930 is configured to update priority configuration information corresponding to the target scene mode based on the user behavior data, where the priority configuration information is used to configure priority relationships among various audio streams.
  • the priority configuration information is used to configure the priority relationship between multiple audio stream types, and the priority configuration information includes: the identifier of the audio stream type and the priority value corresponding to that identifier.
  • the configuration update module 930 is configured to: if the user behavior data includes data showing that an audio stream belonging to the target audio stream type preempted other audio streams for playback, update, in the priority configuration information corresponding to the target scene mode, the priority value corresponding to the target audio stream type.
  • the priority configuration information is used to configure the priority relationship between multiple audio stream types, and to configure the priority relationship between the same or different audio stream types of different connected devices of the audio playback device.
  • the priority configuration information includes: the identifier of the connected device, the identifier of the audio stream type, and the priority value corresponding to the pair formed by the identifier of the connected device and the identifier of the audio stream type; or
  • the priority configuration information includes: a combination identifier and a priority value corresponding to the combination identifier, wherein the combination identifier is used to distinguish between different combinations of the connection device and the audio stream type .
  • the configuration update module 930 is configured to: if the user behavior data includes data showing that an audio stream from the target connected device and belonging to the target audio stream type preempted other audio streams for playback, update, in the priority configuration information corresponding to the target scene mode, the priority value corresponding to the target audio stream type of the target connected device.
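A hedged sketch of this update step: when the user behavior data shows a stream of a given (connected device, audio stream type) combination preempting others, the corresponding priority value is raised. The increment policy is an assumption; the document does not specify how the value changes:

```python
def update_priority(config, device, stream_type, bump=1):
    """Raise the priority value of the (device, stream type) combination
    observed preempting other streams in the user behavior data."""
    key = (device, stream_type)
    config[key] = config.get(key, 0) + bump  # assumed additive policy
    return config
```

Applied repeatedly as behavior data accumulates, combinations the user frequently lets preempt drift toward the top of the table.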
  • the mode determining module 910 is configured to: collect user voice data and recognize it to determine the target scene mode; or collect environmental audio data and recognize it to determine the target scene mode; or, in response to a user operation on the audio playback device, determine the target scene mode based on the user operation; or receive scene switching information from a control device and identify the target scene mode according to the scene switching information.
  • the apparatus 900 further includes: a priority determination module 940 and a playback control module 950 .
  • the priority determination module 940 is configured to, during the process of playing the first audio stream in the target scene mode, if a playback instruction for the second audio stream is received, according to the priority configuration information corresponding to the target scene mode, A priority relationship between the first audio stream and the second audio stream is determined.
  • the play control module 950 is configured to pause playing the first audio stream and play the second audio stream if the priority of the second audio stream is higher than the priority of the first audio stream.
  • the playback control module 950 is configured to: switch the first state machine from the transmitting state to the stop-transmitting state, where the first state machine is the state machine corresponding to the first audio stream; and disconnect the audio stream transmission connection between the audio playback device and the first connected device, where the first connected device is the device that transmits the first audio stream to the audio playback device.
  • the playback control module 950 is further configured to switch the first state machine from the stop-transmitting state to the parameter-configuration-completed state after switching the first state machine from the transmitting state to the stop-transmitting state.
  • the playback control module 950 is further configured to: switch the second state machine from the parameter-configuration-completed state to the connection establishment state, where the second state machine is the state machine corresponding to the second audio stream; establish an audio stream transmission connection between the audio playback device and the second connected device, where the second connected device is the device that transmits the second audio stream to the audio playback device; switch the second state machine from the connection establishment state to the transmitting state; receive the second audio stream through the connection; and play it.
  • the device 900 further includes: a function enabling module 960 .
  • the function enabling module 960 is configured to enable the automatic switching function in response to an enabling instruction for the automatic switching function; the automatic switching function refers to the function of automatically switching which audio stream is played according to priority configuration information that meets the availability condition;
  • the availability condition means that the priority configuration information is generated based on user behavior data within a specific period of time.
  • the audio playback device (such as a headset) can realize the setting of priority configuration information without the assistance of other devices.
  • the audio playback device collects and records user behavior data
  • the user behavior data includes the data that controls the audio playback device to switch and play different audio streams; the device then automatically updates the priority configuration information based on the user behavior data, realizing automatic setting and updating of the priority configuration information without the assistance of other devices or manual setting by the user, which greatly reduces complexity.
  • for different scene modes, corresponding priority configuration information is set respectively, so that when the audio playback device is in different scene modes, audio stream playback can be controlled with the priority configuration information suitable for the current scene mode, making audio playback control more accurate and better meeting user needs.
  • the division of the above-mentioned functional modules is used as an example for illustration. In practical applications, the above-mentioned function allocation can be completed by different functional modules according to the needs.
  • the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
  • the device and the method embodiment provided by the above embodiment belong to the same idea, and the specific implementation process thereof is detailed in the method embodiment, and will not be repeated here.
  • FIG. 11 shows a structural block diagram of an audio playback device provided by an embodiment of the present application.
  • the audio playback device can be used to realize the functions of the above-mentioned priority configuration method for audio playback.
  • the audio playback device 110 may include: a processor 111 , a receiver 112 , a transmitter 113 , a memory 114 and a bus 115 .
  • the processor 111 includes one or more processing cores, and the processor 111 executes various functional applications and information processing by running software programs and modules.
  • the receiver 112 and the transmitter 113 can be implemented as a communication component, which can be a communication chip.
  • the memory 114 is connected to the processor 111 through the bus 115 .
  • the memory 114 may be used to store a computer program, and the processor 111 is used to execute the computer program, so as to implement various steps in the foregoing method embodiments.
  • the memory 114 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, including but not limited to: RAM (Random-Access Memory), ROM (Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other solid-state storage technology, CD-ROM (Compact Disc Read-Only Memory), DVD (Digital Video Disc) or other optical storage, tape cartridges, magnetic tape, magnetic disk storage or other magnetic storage devices.
  • a computer-readable storage medium is also provided, in which a computer program is stored.
  • when the computer program is executed by a processor of an audio playback device, the above priority configuration method for audio playback is implemented.
  • the computer-readable storage medium may include: ROM, RAM, SSD (Solid State Drives, solid state disk) or optical disc, etc.
  • the random access memory may include ReRAM (Resistance Random Access Memory, resistive random access memory) and DRAM (Dynamic Random Access Memory, dynamic random access memory).
  • a computer program product or computer program is also provided, comprising computer instructions stored in a computer-readable storage medium.
  • the processor of the audio playback device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the audio playback device executes the above audio playback priority configuration method.
  • the "plurality” mentioned herein refers to two or more than two.
  • “And/or” describes the association relationship of associated objects, indicating that there may be three types of relationships, for example, A and/or B may indicate: A exists alone, A and B exist simultaneously, and B exists independently.
  • the character "/” generally indicates that the contextual objects are an "or” relationship.
  • the numbering of the steps described herein only exemplarily shows one possible execution order among the steps. In some other embodiments, the steps may not be executed in numerical order; for example, two differently numbered steps may be executed at the same time, or two differently numbered steps may be executed in the reverse of the illustrated order, which is not limited in the embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Telephonic Communication Services (AREA)
  • Telephone Function (AREA)

Abstract

Provided are a priority configuration method and apparatus for audio playback, and a device and a storage medium, relating to the technical field of media playback. The method comprises: determining a target scene mode of an audio playback device (410); acquiring user behavior data generated in the target scene mode, the user behavior data comprising data for controlling the audio playback device to switch between playing different audio streams (420); and updating, based on the user behavior data, priority configuration information corresponding to the target scene mode, the priority configuration information being used to configure the priority relationship between various audio streams (430). By means of the method, automatic setting and updating of priority configuration information is achieved without the assistance of other devices and without manual setting by the user, which greatly reduces complexity. In addition, corresponding priority configuration information is set for different scene modes, so that audio playback control is more accurate and user needs are better met.
PCT/CN2022/105703 2021-07-22 2022-07-14 Procédé et appareil de configuration de priorité pour lecture audio, et dispositif et support de stockage WO2023001054A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110831393.0 2021-07-22
CN202110831393.0A CN115686424A (zh) 2021-07-22 2021-07-22 针对音频播放的优先级配置方法、装置、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023001054A1 true WO2023001054A1 (fr) 2023-01-26

Family

ID=84978884

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/105703 WO2023001054A1 (fr) 2021-07-22 2022-07-14 Procédé et appareil de configuration de priorité pour lecture audio, et dispositif et support de stockage

Country Status (2)

Country Link
CN (1) CN115686424A (fr)
WO (1) WO2023001054A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115835419A (zh) * 2023-02-14 2023-03-21 深圳市友恺通信技术有限公司 一种基于物联网的无线设备状态监测系统及方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116017388B (zh) * 2023-02-17 2023-08-22 荣耀终端有限公司 一种基于音频业务的弹窗显示方法和电子设备
CN117858031B (zh) * 2024-03-07 2024-05-28 深圳市汇杰芯科技有限公司 一种低延时无线对讲和tws无缝切换系统、方法及介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550037A (zh) * 2015-12-11 2016-05-04 北京元心科技有限公司 多系统中分配音频资源的方法及装置
CN109525707A (zh) * 2018-10-15 2019-03-26 维沃移动通信有限公司 一种音频播放方法及移动终端
CN109890021A (zh) * 2019-03-06 2019-06-14 西安易朴通讯技术有限公司 蓝牙耳机切换方法、蓝牙耳机及终端
US20210193158A1 (en) * 2019-12-23 2021-06-24 Motorola Solutions, Inc. Device and method for controlling a speaker according to priority data
CN113050910A (zh) * 2019-12-26 2021-06-29 阿里巴巴集团控股有限公司 语音交互方法、装置、设备及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550037A (zh) * 2015-12-11 2016-05-04 北京元心科技有限公司 多系统中分配音频资源的方法及装置
CN109525707A (zh) * 2018-10-15 2019-03-26 维沃移动通信有限公司 一种音频播放方法及移动终端
CN109890021A (zh) * 2019-03-06 2019-06-14 西安易朴通讯技术有限公司 蓝牙耳机切换方法、蓝牙耳机及终端
US20210193158A1 (en) * 2019-12-23 2021-06-24 Motorola Solutions, Inc. Device and method for controlling a speaker according to priority data
CN113050910A (zh) * 2019-12-26 2021-06-29 阿里巴巴集团控股有限公司 语音交互方法、装置、设备及存储介质

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115835419A (zh) * 2023-02-14 2023-03-21 深圳市友恺通信技术有限公司 一种基于物联网的无线设备状态监测系统及方法

Also Published As

Publication number Publication date
CN115686424A (zh) 2023-02-03

Similar Documents

Publication Publication Date Title
WO2023001054A1 (fr) Procédé et appareil de configuration de priorité pour lecture audio, et dispositif et support de stockage
US11812485B2 (en) Bluetooth communication method and terminal
US9363352B2 (en) Duplex audio for mobile communication device and accessory
US20100048133A1 (en) Audio data flow input/output method and system
US9652195B2 (en) Audio output device that utilizes policies to concurrently handle multiple audio streams from different source devices
CN110324685B (zh) 管理回放组
WO2013060109A1 (fr) Procédé de traitement pour un appel de terminal, terminal et système de traitement
US11875085B2 (en) Audio rendering device and audio configurator device for audio stream selection, and related methods
US10236016B1 (en) Peripheral-based selection of audio sources
CN113766477A (zh) 设备连接方法、装置、电子设备及计算机可读介质
CN105491252A (zh) 一种切换彩铃的方法装置
WO2021169472A1 (fr) Procédé de transfert d'appel vocal et dispositif électronique
CN105183446A (zh) 音频管理系统
WO2017148270A1 (fr) Procédé et dispositif de commande de volume, et terminal
WO2018028239A1 (fr) Procédé et appareil de commande de terminal, et support de stockage informatique
WO2023109156A1 (fr) Procédé et dispositif de projection d'écran, et support de stockage
WO2022057552A1 (fr) Système de commande de dispositif
WO2024119947A1 (fr) Procédé et appareil de communication bluetooth, dispositif électronique et support lisible par ordinateur
CN113760219A (zh) 信息处理方法和装置
US10827271B1 (en) Backward compatibility for audio systems and methods
CN111580781A (zh) 一种移动终端音频输出方法及移动终端
WO2023035918A1 (fr) Procédé et appareil de commande de lecture audio, et dispositif de sortie audio et support de stockage
WO2023061273A1 (fr) Procédé et appareil de connexion de dispositif, et dispositif électronique et support de stockage
CN105007522B (zh) 一种播放场景管理方法、系统、播放终端及控制终端
CN112965685B (zh) 音频控制方法、装置、系统、终端设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22845219

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22845219

Country of ref document: EP

Kind code of ref document: A1