CN114077412A - Data processing method and related equipment

Info

Publication number
CN114077412A
Authority
CN
China
Prior art keywords
value
numerical value
equipment
play
weight coefficient
Prior art date
Legal status
Pending
Application number
CN202010818113.8A
Other languages
Chinese (zh)
Inventor
许振强
谢靖然
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202010818113.8A
Priority to PCT/CN2021/107582 (WO2022033282A1)
Publication of CN114077412A

Classifications

    • G06F 3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/1454: Digital output to display device involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G06F 3/16: Sound input; sound output
    • G06F 3/165: Management of the audio stream, e.g. setting of volume, audio stream path
    • G06F 18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • H04M 1/72442: User interfaces specially adapted for cordless or mobile telephones, with means for local support of applications for playing music files

Abstract

The present application discloses a data processing method applied in the field of artificial intelligence, and in particular in the field of multi-device cooperation. The method includes: a first device determines, according to a first value and a weight coefficient, a second value to be used for playing a target signal on a second device, and sends the second value to the second device. The switched-to second device can then automatically adjust its playback setting to a value relative to that of the first device (that is, the second value), which reduces the user's perception of the difference in playback levels during switching, makes the switch to the second device comfortable, and improves user experience.

Description

Data processing method and related equipment
Technical Field
The embodiments of the present application relate to the field of terminal artificial intelligence, and in particular to a data processing method and related devices.
Background
Artificial intelligence (AI) comprises theories, methods, techniques, and application systems that use digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, and acquire and apply knowledge to obtain the best results. In other words, artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines can perceive, reason, and make decisions. Research in the field of artificial intelligence includes robotics, natural language processing, computer vision, decision-making and reasoning, human-computer interaction, recommendation and search, basic AI theory, and the like.
In the modern information society, with the rapid development of wireless communication networks, the mobile phone has become one of the most popular communication tools. A mobile phone lets its user communicate wirelessly anytime and anywhere and conveniently make voice calls. In addition, the applications provided on mobile phones keep broadening: users can watch movies, play music, read news, and record video and audio.
With the growing variety of sound output devices for terminals, a mobile phone can also output audio through external sound output devices (power amplifiers, Bluetooth devices, USB speakers, and so on). When the mobile phone switches from its loudspeaker to another sound output device for audio output (for example, from the loudspeaker to Bluetooth), it still uses the volume gain corresponding to the loudspeaker to adjust the output volume of the switched-to device.
However, sound output devices have different device attributes and may affect the same audio data differently. When the mobile phone applies the same gain to adjust the output volume while different sound output devices play the same audio, the resulting output volume may differ from device to device. Consequently, when a user switches the sound output device on a terminal, the user has to manually adjust the volume of the mobile phone to reach the desired level, which is very inconvenient.
Disclosure of Invention
The embodiments of the present application provide a data processing method and related devices. The method can be applied in the AI field, and in particular in the sub-field of multi-device cooperation, to determine appropriate playback parameters for a second device and improve user experience.
An exemplary application scenario of the data processing method is as follows: a user plays multimedia audio on a mobile phone, for example music, video, or a call. During audio playback, playback is switched to a second device via Bluetooth, tap-to-transfer, or in-app screen casting. The volume of the switched-to second device can then be adjusted automatically to a level relative to that of the first device, reducing the user's perception of the difference in loudness during switching and making the switch to the second device comfortable.
Exemplary scenarios:
1. The user wants to transfer music playing on the mobile phone to a Bluetooth headset.
2. The user wants to transfer music, video, or images playing on the mobile phone to a television, a computer, or a projection screen.
3. The user wants to transfer music, text, or images playing on the mobile phone to a smart watch.
It should be understood that these three examples do not impose any limitation on the application scenarios of the embodiments of the present application.
A first aspect of the embodiments of the present application provides a data processing method. The method includes: a first device obtains a first value used by the first device to play a target signal, and the device type of a second device; the first device determines a weight coefficient of the second device; the first device determines, according to the weight coefficient and the first value, a second value to be used for playing the target signal on the second device; and the first device sends the second value to the second device, so that the second device plays the target signal using the second value.
In this embodiment of the application, the first device determines, according to the first value and the weight coefficient, the second value to be used for playing the target signal on the second device, and sends the second value to the second device. The switched-to second device can then automatically adjust its playback setting to a value relative to that of the first device (that is, the second value), which reduces the user's perception of the difference in levels during switching, makes the switch to the second device comfortable, and improves user experience.
Optionally, in a possible implementation manner of the first aspect, the determining, by the first device, of the weight coefficient of the second device includes: the first device determines the weight coefficient of the second device according to a first mapping table, where the first mapping table represents the association between the device type of the second device and the weight coefficient.
In this possible implementation, the weight coefficient of the second device is determined from the first mapping table; optionally, the first mapping table incorporates human-factors (ergonomics) analysis, which improves the user experience when audio playback is switched. A minimal sketch of this lookup and conversion follows.
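The patent does not specify table contents or a volume scale; the following Python sketch assumes illustrative device types, weight coefficients, and a linear 0-15 volume range.

```python
# Illustrative sketch only: the device types, weight coefficients, and the
# linear 0-15 volume scale are assumptions, not values from the patent.

FIRST_MAPPING_TABLE = {
    "bluetooth_headset": 1.2,  # weight coefficient per device type (assumed)
    "television":        0.8,
    "smart_watch":       1.5,
}

def second_value(first_value: float, device_type: str) -> float:
    """Derive the value the second device should use for playback from the
    first device's value and the second device's weight coefficient."""
    weight = FIRST_MAPPING_TABLE[device_type]
    # The fourth aspect computes the product of the first value and the
    # weight coefficient; clamp the result to the assumed volume range.
    return max(0.0, min(15.0, first_value * weight))

# Example: the phone plays at volume 8 and switches to a Bluetooth headset.
print(second_value(8, "bluetooth_headset"))  # -> 9.6
```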
Optionally, in a possible implementation manner of the first aspect, the method further includes: the first device receives a third value sent by the second device, where the third value is obtained after the second value is adjusted; and the first device updates the first mapping table according to the third value to obtain a second mapping table, where the weight coefficient of the second device in the second mapping table is different from the weight coefficient of the second device in the first mapping table.
In this possible implementation, the first device updates the first mapping table according to the third value; that is, the first device can update the first mapping table from subsequent user operations, which better matches the actual usage scenario of switching between the first device and the second device. One plausible update rule is sketched below.
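The patent does not give an update formula; as an assumption, a new weight coefficient could be inferred from the ratio of the user-adjusted third value to the first value, continuing the sketch above.

```python
def update_mapping_table(table: dict, device_type: str,
                         first_value: float, third_value: float) -> dict:
    """Return a second mapping table in which the device type's weight
    coefficient reflects the user's adjustment. The ratio rule below is an
    assumed heuristic, not a formula taken from the patent."""
    updated = dict(table)
    if first_value > 0:
        updated[device_type] = third_value / first_value
    return updated

# Example: the user turned the headset down from 9.6 to 7 while the phone
# was at volume 8, so the learned weight coefficient becomes 7 / 8 = 0.875.
SECOND_MAPPING_TABLE = update_mapping_table(FIRST_MAPPING_TABLE,
                                            "bluetooth_headset", 8, 7)
```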
Optionally, in a possible implementation manner of the first aspect, the method further includes: the first device obtains target features, where the target features include at least one of the first value, the playing location of the second device, the system time of the second device, and the third value; and the first device trains a model to be trained according to the target features to obtain a prediction weight model.
In this possible implementation, the first device can perform model training on target features such as the playing location of the second device, the third value adjusted by the user, and the system time of the second device at the moment of adjustment, to obtain a prediction weight model that better fits the user's habits in specific scenarios. A minimal training sketch follows.
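The patent does not name a model family or a feature encoding; as an assumption, a simple linear regression over numerically encoded target features could serve as the prediction weight model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Assumed feature encoding (not specified by the patent):
# [first_value, place_id, hour_of_day] -> the third value the user chose.
X = np.array([
    [8.0, 0, 9],    # phone volume 8, at home (place 0), 09:00
    [8.0, 1, 13],   # phone volume 8, at the office (place 1), 13:00
    [12.0, 0, 22],  # phone volume 12, at home, 22:00 (user prefers quieter)
    [5.0, 1, 10],
])
y = np.array([7.0, 9.0, 8.0, 6.0])  # user-adjusted third values

# Train the "model to be trained" on the collected target features.
prediction_weight_model = LinearRegression().fit(X, y)
```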
Optionally, in a possible implementation manner of the first aspect, the method further includes: the first device obtains a fourth value according to the first value and the prediction weight model; and the first device sends the fourth value to the second device.
In this possible implementation, the first device uses the trained prediction weight model to obtain a value that meets the user's needs in the specific scenario, and sends the fourth value to the second device, so that the second device plays the target signal with a fourth value that better matches the user's needs. This further reduces the user's perception of the difference in levels during switching and improves user experience.
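Continuing the sketch above, the fourth value would then be predicted from the current first value and context and sent to the second device (the transport is outside this sketch).

```python
# Predict the fourth value for: phone volume 8, at home (place 0), 21:00.
fourth_value = float(prediction_weight_model.predict(np.array([[8.0, 0, 21]]))[0])
# send_to_second_device(fourth_value)  # hypothetical transport call
```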
Optionally, in a possible implementation manner of the first aspect, the target signal includes an audio signal, the first value includes a volume value used for playing the audio signal on the first device, and the second value includes a volume value used for playing the audio signal on the second device; or the target signal includes a video signal, the first value includes a volume value or a brightness value used for playing the video signal on the first device, and the second value includes a volume value or a brightness value used for playing the video signal on the second device; or the target signal includes an image, the first value includes a brightness value or a scaling ratio used for displaying the image on the first device, and the second value includes a brightness value or a scaling ratio used for displaying the image on the second device; or the target signal includes text, the first value includes a playback speed value used for playing the text on the first device, and the second value includes a playback speed value used for playing the text on the second device.
In this possible implementation, the user plays multimedia content (for example, music, video, or a call) on the mobile phone. During audio playback, the media playback device is switched via Bluetooth, tap-to-transfer, or in-app screen casting, and the volume of the switched-to second device can be adjusted automatically to a level relative to that of the first device, reducing the user's perception of the difference in loudness during switching and making the switch to the second device comfortable.
A second aspect of the embodiments of the present application provides a data processing method, including: a second device receives a target signal; the second device receives a second value sent by a first device, where the second value is obtained by the first device according to a first value, the device type of the second device, and a first mapping table, the first value is the value used by the first device to play the target signal, the first mapping table represents the association between the device type of the second device and a weight coefficient, and the weight coefficient is used to obtain the second value from the first value; and the second device plays the target signal using the second value.
In this embodiment of the application, the second device obtains the second value corresponding to the first value; that is, the first device determines, by the weight-coefficient method, the value the second device should use to play the target signal and sends the second value to the second device. The switched-to second device can then automatically adjust its playback setting to a value relative to that of the first device (that is, the second value), which reduces the user's perception of the difference in levels during switching, makes the switch comfortable, and improves user experience.
Optionally, in a possible implementation manner of the second aspect, the method further includes: the second device obtains a third value, where the third value is obtained after the second value is adjusted; and the second device sends the third value to the first device, so that the first device updates the first mapping table to obtain a second mapping table, where the weight coefficient of the second device in the second mapping table is different from the weight coefficient of the second device in the first mapping table.
In this possible implementation, after receiving the user's adjustment, the second device sends the adjusted value to the first device, so that the first device can update the first mapping table according to the user's habits; the value the second device uses to play the target signal then better matches the switching scenario between the first device and the second device. This further reduces the user's perception of the difference in levels during switching, makes the switch comfortable, and improves user experience. A sketch of the second device's side of this exchange follows.
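A minimal sketch of the second device's role in the second aspect; the message format and the send() transport are assumptions for illustration, not part of the patent.

```python
class SecondDevice:
    """Toy model of the second device: play at the received second value,
    and report the user's adjusted third value back to the first device."""

    def __init__(self, link):
        self.link = link    # assumed channel back to the first device
        self.volume = None

    def on_second_value(self, second_value: float) -> None:
        # Play the target signal using the received second value.
        self.volume = second_value

    def on_user_adjustment(self, third_value: float) -> None:
        # Play using the value the user actually prefers ...
        self.volume = third_value
        # ... and report it so the first device can update its mapping table.
        self.link.send({"type": "third_value", "value": third_value})
```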
Optionally, in a possible implementation manner of the second aspect, the target signal includes an audio signal, the first value includes a volume value used for playing the audio signal on the first device, and the second value includes a volume value used for playing the audio signal on the second device; or the target signal includes a video signal, the first value includes a volume value or a brightness value used for playing the video signal on the first device, and the second value includes a volume value or a brightness value used for playing the video signal on the second device; or the target signal includes an image, the first value includes a brightness value or a scaling ratio used for displaying the image on the first device, and the second value includes a brightness value or a scaling ratio used for displaying the image on the second device; or the target signal includes text, the first value includes a playback speed value used for playing the text on the first device, and the second value includes a playback speed value used for playing the text on the second device.
A third aspect of the embodiments of the present application provides a data processing method, including: a first device obtains a first value used by the first device to play a target signal, and the device type of a second device; the first device determines a weight coefficient of the second device; and the first device sends the first value and the weight coefficient to the second device, so that the second device determines its second value according to the first value and the weight coefficient.
In this embodiment of the application, the second device determines, according to the first value and the weight coefficient, the second value to be used for playing the target signal on the second device. On the one hand, the switched-to second device can automatically adjust its playback setting to a value relative to that of the first device (that is, the second value), reducing the user's perception of the difference in levels during switching, making the switch comfortable, and improving user experience. On the other hand, it avoids the power consumption the first device would incur in calculating the second value.
Optionally, in a possible implementation manner of the third aspect, the determining, by the first device, of the weight coefficient of the second device includes: the first device determines the weight coefficient of the second device according to a first mapping table, where the first mapping table represents the association between the device type of the second device and the weight coefficient.
In this possible implementation, the weight coefficient of the second device is determined from the first mapping table; optionally, the first mapping table incorporates human-factors analysis, which improves the user experience when audio playback is switched.
Optionally, in a possible implementation manner of the third aspect, the method further includes: the first device receives a third value sent by the second device, where the third value is obtained after the second value is adjusted; and the first device updates the first mapping table according to the third value to obtain a second mapping table, where the weight coefficient of the second device in the second mapping table is different from the weight coefficient of the second device in the first mapping table.
In this possible implementation, the first device updates the first mapping table according to the third value; that is, the first device can update the first mapping table from the user's adjustments, so that the table better matches the usage scenario of switching between the first device and the second device.
Optionally, in a possible implementation manner of the third aspect, the method further includes: the first device obtains target features, where the target features include at least one of the first value, the playing location of the second device, the system time of the second device, and the third value; and the first device trains a model to be trained according to the target features to obtain a prediction weight model.
In this possible implementation, the first device can perform model training on target features such as the playing location of the second device, the third value adjusted by the user, and the system time of the second device at the moment of adjustment, to obtain a prediction weight model that better fits the user's habits in specific scenarios.
Optionally, in a possible implementation manner of the third aspect, the method further includes: the first device obtains a fourth value according to the first value and the prediction weight model; and the first device sends the fourth value to the second device.
In this possible implementation, the first device uses the trained prediction weight model to obtain a value that meets the user's needs in the specific scenario and sends the fourth value to the second device, so that the second device plays the target signal with a fourth value that better matches the user's needs. This further reduces the user's perception of the difference in levels during switching and improves user experience.
Optionally, in a possible implementation manner of the third aspect, the target signal includes an audio signal, the first value includes a volume value used for playing the audio signal on the first device, and the second value includes a volume value used for playing the audio signal on the second device; or the target signal includes a video signal, the first value includes a volume value or a brightness value used for playing the video signal on the first device, and the second value includes a volume value or a brightness value used for playing the video signal on the second device; or the target signal includes an image, the first value includes a brightness value or a scaling ratio used for displaying the image on the first device, and the second value includes a brightness value or a scaling ratio used for displaying the image on the second device; or the target signal includes text, the first value includes a playback speed value used for playing the text on the first device, and the second value includes a playback speed value used for playing the text on the second device.
In this possible implementation, the user plays multimedia content (for example, music, video, or a call) on the mobile phone. During audio playback, the media playback device is switched via Bluetooth, tap-to-transfer, or in-app screen casting, and the volume of the switched-to second device can be adjusted automatically to a level relative to that of the first device, reducing the user's perception of the difference in loudness during switching and making the switch to the second device comfortable.
A fourth aspect of the embodiments of the present application provides a data processing method, including: a second device receives a target signal; the second device receives a first value and a weight coefficient sent by a first device, where the first value is the value used by the first device to play the target signal; the second device determines a second value according to the first value and the weight coefficient; and the second device plays the target signal using the second value.
In this embodiment of the application, the second device determines, according to the first value and the weight coefficient, the second value to be used for playing the target signal on the second device. On the one hand, the switched-to second device can automatically adjust its playback setting to a value relative to that of the first device (that is, the second value), reducing the user's perception of the difference in levels during switching, making the switch comfortable, and improving user experience. On the other hand, it avoids the power consumption the first device would incur in calculating the second value.
Optionally, in a possible implementation manner of the fourth aspect, the determining, by the second device, of the second value according to the first value and the weight coefficient includes: the second device calculates the product of the first value and the weight coefficient to obtain the second value.
In this possible implementation, when the second device has computing capability, it obtains the second value by calculating the product of the first value and the weight coefficient, which avoids the power consumption the first device would incur in calculating the second value.
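For example (illustrative numbers only, not from the patent): if the first device plays at volume 8 and the weight coefficient of the second device is 1.25, the second device computes 8 × 1.25 = 10 and plays the target signal at volume 10.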
Optionally, in a possible implementation manner of the fourth aspect, the method further includes: the second device obtains a third value, where the third value is obtained after the second value is adjusted; the second device plays the target signal using the third value; and the second device sends the third value to the first device, so that the first device updates the weight coefficient of the second device in the first mapping table using the third value.
In this possible implementation, after obtaining the user's adjusted value, the second device sends it to the first device, so that the first device can update the first mapping table from the user's adjustment and better match the switching scenario between the first device and the second device.
Optionally, in a possible implementation manner of the fourth aspect, the method further includes: the second device obtains target features, where the target features include at least one of the playing location of the second device, the system time of the second device, and the third value; and the second device trains a model to be trained according to the target features to obtain a prediction weight model.
In this possible implementation, the second device can perform model training on target features such as its playing location, the third value adjusted by the user, and the system time at the moment of adjustment. On the one hand, this yields a prediction weight model that better fits the user's habits in specific scenarios; on the other hand, it avoids the power consumption the first device would incur in training the model.
Optionally, in a possible implementation manner of the fourth aspect, the method further includes: the second device obtains a fourth value according to the first value and the prediction weight model; and the second device plays the target signal using the fourth value.
In this possible implementation, the second device uses the trained prediction weight model to obtain a value that meets the user's needs in the specific scenario and plays the target signal with a fourth value that better matches those needs, further reducing the user's perception of the difference in levels during switching and improving user experience.
Optionally, in a possible implementation manner of the fourth aspect, the target signal includes an audio signal, the first value includes a volume value used for playing the audio signal on the first device, and the second value includes a volume value used for playing the audio signal on the second device; or the target signal includes a video signal, the first value includes a volume value or a brightness value used for playing the video signal on the first device, and the second value includes a volume value or a brightness value used for playing the video signal on the second device; or the target signal includes an image, the first value includes a brightness value or a scaling ratio used for displaying the image on the first device, and the second value includes a brightness value or a scaling ratio used for displaying the image on the second device; or the target signal includes text, the first value includes a playback speed value used for playing the text on the first device, and the second value includes a playback speed value used for playing the text on the second device.
In this possible implementation, the user plays multimedia content (for example, music, video, or a call) on the mobile phone. During audio playback, the media playback device is switched via Bluetooth, tap-to-transfer, or in-app screen casting, and the volume of the switched-to second device can be adjusted automatically to a level relative to that of the first device, reducing the user's perception of the difference in loudness during switching and making the switch to the second device comfortable.
A fifth aspect of the embodiments of the present application provides a data processing method, including: a second device receives a target signal; the second device receives a first value from a first device, where the first value is the value used by the first device to play the target signal; the second device determines a weight coefficient; the second device determines a second value according to the first value and the weight coefficient; and the second device plays the target signal using the second value.
In this embodiment of the application, the second device obtains, according to the received first value and the weight coefficient it determines, the second value to be used for playing the target signal on the second device. On the one hand, the switched-to second device can automatically adjust its playback setting to a value relative to that of the first device (that is, the second value), reducing the user's perception of the difference in levels during switching, making the switch comfortable, and improving user experience. On the other hand, it avoids the power consumption the first device would incur in calculating the second value.
Optionally, in a possible implementation manner of the fifth aspect, the determining, by the second device, of the weight coefficient includes: the second device determines the weight coefficient according to a first mapping table, where the first mapping table represents the association between the device type of the second device and the weight coefficient.
In this possible implementation, on the one hand, the second device determines its weight coefficient from the first mapping table; optionally, the first mapping table incorporates human-factors analysis, which improves the user experience when audio playback is switched. On the other hand, the second device can store the first mapping table itself, which reduces the memory the first device would use to store the table and the power it would consume calculating the second value.
Optionally, in a possible implementation manner of the fifth aspect, the method further includes: the second device obtains a third value, where the third value is obtained after the second value is adjusted; and the second device plays the target signal using the third value.
In this possible implementation, the second device plays the target signal at a value matching the user's habits, following the user's adjustment, which improves user experience.
Optionally, in a possible implementation manner of the fifth aspect, the method further includes: the second device updates the first mapping table using the third value to obtain a second mapping table, where the weight coefficient of the second device in the second mapping table is different from the weight coefficient of the second device in the first mapping table.
In this possible implementation, after receiving the user's adjustment, the second device updates the first mapping table according to the user's habits, so that the value it uses to play the target signal better matches the switching scenario between the first device and the second device. This further reduces the user's perception of the difference in levels during switching, makes the switch comfortable, and improves user experience.
Optionally, in a possible implementation manner of the fifth aspect, the method further includes: the second device obtains target features, where the target features include at least one of the playing location of the second device, the system time of the second device, and the third value; and the second device trains a model to be trained according to the target features to obtain a prediction weight model.
In this possible implementation, the second device can perform model training on target features such as its playing location, the third value adjusted by the user, and the system time at the moment of adjustment. On the one hand, this yields a prediction weight model that better fits the user's habits in specific scenarios; on the other hand, it avoids the power consumption the first device would incur in training the model.
Optionally, in a possible implementation manner of the fifth aspect, the method further includes: the second device obtains a fourth value according to the first value and the prediction weight model; and the second device plays the target signal using the fourth value.
In this possible implementation, the second device uses the trained prediction weight model to obtain a value that meets the user's needs in the specific scenario and plays the target signal with a fourth value that better matches those needs, further reducing the user's perception of the difference in levels during switching and improving user experience.
Optionally, in a possible implementation manner of the fifth aspect, the target signal includes an audio signal, the first value includes a volume value used for playing the audio signal on the first device, and the second value includes a volume value used for playing the audio signal on the second device; or the target signal includes a video signal, the first value includes a volume value or a brightness value used for playing the video signal on the first device, and the second value includes a volume value or a brightness value used for playing the video signal on the second device; or the target signal includes an image, the first value includes a brightness value or a scaling ratio used for displaying the image on the first device, and the second value includes a brightness value or a scaling ratio used for displaying the image on the second device; or the target signal includes text, the first value includes a playback speed value used for playing the text on the first device, and the second value includes a playback speed value used for playing the text on the second device.
In this possible implementation, the user plays multimedia content (for example, music, video, or a call) on the mobile phone. During audio playback, the media playback device is switched via Bluetooth, tap-to-transfer, or in-app screen casting, and the volume of the switched-to second device can be adjusted automatically to a level relative to that of the first device, reducing the user's perception of the difference in loudness during switching and making the switch to the second device comfortable.
A sixth aspect of the present application provides a first device, which may be a terminal device or a component of a terminal device (for example, a processor, a chip, or a chip system). The first device includes:
a transceiver unit, configured to obtain a first value used by the first device to play a target signal and the device type of a second device; and
a processing unit, configured to determine a weight coefficient of the second device;
where the processing unit is further configured to determine, according to the weight coefficient and the first value, a second value to be used for playing the target signal on the second device; and
the transceiver unit is further configured to send the second value to the second device, so that the second device plays the target signal using the second value.
Optionally, in a possible implementation manner of the sixth aspect, the processing unit is specifically configured to determine the weight coefficient of the second device according to a first mapping table, where the first mapping table represents the association between the device type of the second device and the weight coefficient.
Optionally, in a possible implementation manner of the sixth aspect, the transceiver unit is further configured to receive a third value sent by the second device, where the third value is obtained by adjusting the second value; and the processing unit is further configured to update the first mapping table according to the third value to obtain a second mapping table, where the weight coefficient of the second device in the second mapping table is different from the weight coefficient of the second device in the first mapping table.
Optionally, in a possible implementation manner of the sixth aspect, the transceiver unit is further configured to obtain target features, where the target features include at least one of the first value, the playing location of the second device, the system time of the second device, and the third value; and the processing unit is further configured to train a model to be trained according to the target features to obtain a prediction weight model.
Optionally, in a possible implementation manner of the sixth aspect, the processing unit is further configured to obtain a fourth value according to the first value and the prediction weight model; and the transceiver unit is further configured to send the fourth value to the second device.
Optionally, in a possible implementation manner of the sixth aspect, the target signal includes an audio signal, the first value includes a volume value used for playing the audio signal on the first device, and the second value includes a volume value used for playing the audio signal on the second device; or the target signal includes a video signal, the first value includes a volume value or a brightness value used for playing the video signal on the first device, and the second value includes a volume value or a brightness value used for playing the video signal on the second device; or the target signal includes an image, the first value includes a brightness value or a scaling ratio used for displaying the image on the first device, and the second value includes a brightness value or a scaling ratio used for displaying the image on the second device; or the target signal includes text, the first value includes a playback speed value used for playing the text on the first device, and the second value includes a playback speed value used for playing the text on the second device.
A seventh aspect of the present application provides a second device, which may be a smart watch, a television, a computer, a projection screen, or the like, or a component of such a device (for example, a processor, a chip, or a chip system). The second device includes:
a transceiver unit, configured to receive a target signal;
where the transceiver unit is further configured to receive a second value sent by a first device, the second value being obtained by the first device according to a first value, the device type of the second device, and a first mapping table, where the first value is the value used by the first device to play the target signal, the first mapping table represents the association between the device type of the second device and a weight coefficient, and the weight coefficient is used to obtain the second value from the first value; and
a processing unit, configured to play the target signal using the second value.
Optionally, in a possible implementation manner of the seventh aspect, the transceiver unit is further configured to obtain a third value, where the third value is obtained after the second value is adjusted; and the transceiver unit is further configured to send the third value to the first device, so that the first device updates the first mapping table to obtain a second mapping table, where the weight coefficient of the second device in the second mapping table is different from the weight coefficient of the second device in the first mapping table.
Optionally, in a possible implementation manner of the seventh aspect, the target signal includes an audio signal, the first value includes a volume value used for playing the audio signal on the first device, and the second value includes a volume value used for playing the audio signal on the second device; or the target signal includes a video signal, the first value includes a volume value or a brightness value used for playing the video signal on the first device, and the second value includes a volume value or a brightness value used for playing the video signal on the second device; or the target signal includes an image, the first value includes a brightness value or a scaling ratio used for displaying the image on the first device, and the second value includes a brightness value or a scaling ratio used for displaying the image on the second device; or the target signal includes text, the first value includes a playback speed value used for playing the text on the first device, and the second value includes a playback speed value used for playing the text on the second device.
An eighth aspect of the present application provides a first device, which may be a terminal device or a component of a terminal device (for example, a processor, a chip, or a chip system). The first device includes:
a transceiver unit, configured to obtain a first value used by the first device to play a target signal and the device type of a second device; and
a processing unit, configured to determine a weight coefficient of the second device;
where the transceiver unit is further configured to send the first value and the weight coefficient to the second device, so that the second device determines its second value according to the first value and the weight coefficient.
Optionally, in a possible implementation manner of the eighth aspect, the processing unit is specifically configured to determine the weight coefficient of the second device according to a first mapping table, where the first mapping table represents the association between the device type of the second device and the weight coefficient.
Optionally, in a possible implementation manner of the eighth aspect, the transceiver unit is further configured to receive a third value sent by the second device, where the third value is obtained after the second value is adjusted; and the processing unit is further configured to update the first mapping table according to the third value to obtain a second mapping table, where the weight coefficient of the second device in the second mapping table is different from the weight coefficient of the second device in the first mapping table.
Optionally, in a possible implementation manner of the eighth aspect, the transceiver unit is further configured to obtain target features, where the target features include at least one of the first value, the playing location of the second device, the system time of the second device, and the third value; and the processing unit is further configured to train a model to be trained according to the target features to obtain a prediction weight model.
Optionally, in a possible implementation manner of the eighth aspect, the processing unit is further configured to obtain a fourth value according to the first value and the prediction weight model; and the transceiver unit is further configured to send the fourth value to the second device.
Optionally, in a possible implementation manner of the eighth aspect, the target signal includes an audio signal, the first value includes a volume value used for playing the audio signal on the first device, and the second value includes a volume value used for playing the audio signal on the second device; or the target signal includes a video signal, the first value includes a volume value or a brightness value used for playing the video signal on the first device, and the second value includes a volume value or a brightness value used for playing the video signal on the second device; or the target signal includes an image, the first value includes a brightness value or a scaling ratio used for displaying the image on the first device, and the second value includes a brightness value or a scaling ratio used for displaying the image on the second device; or the target signal includes text, the first value includes a playback speed value used for playing the text on the first device, and the second value includes a playback speed value used for playing the text on the second device.
A ninth aspect of the present application provides a second device, which may be a smart watch, a television, a computer, a projection screen, or the like, or a component of such a device (for example, a processor, a chip, or a chip system). The second device includes:
a transceiver unit, configured to receive a target signal;
where the transceiver unit is further configured to receive a first value and a weight coefficient sent by a first device, the first value being the value used by the first device to play the target signal; and
a processing unit, configured to determine a second value according to the first value and the weight coefficient;
where the processing unit is further configured to play the target signal using the second value.
Optionally, in a possible implementation manner of the ninth aspect, the processing unit is specifically configured to calculate the product of the first value and the weight coefficient to obtain the second value.
Optionally, in a possible implementation manner of the ninth aspect, the transceiver unit is further configured to obtain a third value, where the third value is obtained after the second value is adjusted; the processing unit is further configured to play the target signal using the third value; and the transceiver unit is further configured to send the third value to the first device, so that the first device updates the weight coefficient of the second device in the first mapping table using the third value.
Optionally, in a possible implementation manner of the ninth aspect, the transceiver unit is configured to obtain target features, where the target features include at least one of the playing location of the second device, the system time of the second device, and the third value; and the processing unit is configured to train a model to be trained according to the target features to obtain a prediction weight model.
Optionally, in a possible implementation manner of the ninth aspect, the processing unit is further configured to obtain a fourth value according to the first value and the prediction weight model, and to play the target signal using the fourth value.
Optionally, in a possible implementation manner of the ninth aspect, the target signal includes an audio signal, the first value includes a volume value used for playing the audio signal on the first device, and the second value includes a volume value used for playing the audio signal on the second device; or the target signal includes a video signal, the first value includes a volume value or a brightness value used for playing the video signal on the first device, and the second value includes a volume value or a brightness value used for playing the video signal on the second device; or the target signal includes an image, the first value includes a brightness value or a scaling ratio used for displaying the image on the first device, and the second value includes a brightness value or a scaling ratio used for displaying the image on the second device; or the target signal includes text, the first value includes a playback speed value used for playing the text on the first device, and the second value includes a playback speed value used for playing the text on the second device.
A tenth aspect of the present application provides a second device, which may be a smart watch, a television, a computer, a projection screen, or the like, or a component of such a device (for example, a processor, a chip, or a chip system). The second device includes:
a transceiver unit, configured to receive a target signal;
where the transceiver unit is further configured to receive a first value from a first device, the first value being the value used by the first device to play the target signal; and
a processing unit, configured to determine a weight coefficient;
where the processing unit is further configured to determine a second value according to the first value and the weight coefficient, and to play the target signal using the second value.
Optionally, in a possible implementation manner of the tenth aspect, the processing unit is specifically configured to determine the weight coefficient according to a first mapping table, where the first mapping table represents the association between the device type of the second device and the weight coefficient.
Optionally, in a possible implementation manner of the tenth aspect, the transceiver unit is further configured to obtain a third value, where the third value is obtained after the second value is adjusted; and the processing unit is further configured to play the target signal using the third value.
Optionally, in a possible implementation manner of the tenth aspect, the processing unit is further configured to update the first mapping table using the third value to obtain a second mapping table, where the weight coefficient of the second device in the second mapping table is different from the weight coefficient of the second device in the first mapping table.
Optionally, in a possible implementation manner of the tenth aspect, the transceiver unit is further configured to obtain target features, where the target features include at least one of the playing location of the second device, the system time of the second device, and the third value; and the processing unit is further configured to train a model to be trained according to the target features to obtain a prediction weight model.
Optionally, in a possible implementation manner of the tenth aspect, the processing unit is further configured to obtain a fourth value according to the first value and the prediction weight model, and to play the target signal using the fourth value.
Optionally, in a possible implementation manner of the tenth aspect, the target signal includes an audio signal, the first value includes a volume value used for playing the audio signal on the first device, and the second value includes a volume value used for playing the audio signal on the second device; or the target signal includes a video signal, the first value includes a volume value or a brightness value used for playing the video signal on the first device, and the second value includes a volume value or a brightness value used for playing the video signal on the second device; or the target signal includes an image, the first value includes a brightness value or a scaling ratio used for displaying the image on the first device, and the second value includes a brightness value or a scaling ratio used for displaying the image on the second device; or the target signal includes text, the first value includes a playback speed value used for playing the text on the first device, and the second value includes a playback speed value used for playing the text on the second device.
An eleventh aspect of embodiments of the present application provides a first device, which may be a terminal device, or may be a component (e.g., a processor, a chip, or a system of chips) of a terminal device. The first device performs the method of the first aspect or any possible implementation manner of the first aspect, or the method of the third aspect or any possible implementation manner of the third aspect.
A twelfth aspect of embodiments of the present application provides a second device, which may be a smart watch, a television, a computer, a projection screen, or the like, or may be a component (e.g., a processor, a chip, or a system of chips) of such a device. The second device performs the method of the second aspect or any possible implementation manner of the second aspect, the fourth aspect or any possible implementation manner of the fourth aspect, or the fifth aspect or any possible implementation manner of the fifth aspect.
A thirteenth aspect of the present application provides a first device, including: a processor coupled to a memory, where the memory is configured to store a program or instructions that, when executed by the processor, cause the first device to implement the method of the first aspect or any possible implementation manner of the first aspect, or cause the first device to implement the method of the third aspect or any possible implementation manner of the third aspect.
A fourteenth aspect of the present application provides a second device, including: a processor coupled to a memory, where the memory is configured to store a program or instructions that, when executed by the processor, cause the second device to implement the method of the second aspect or any possible implementation manner of the second aspect, cause the second device to implement the method of the fourth aspect or any possible implementation manner of the fourth aspect, or cause the second device to implement the method of the fifth aspect or any possible implementation manner of the fifth aspect.
A fifteenth aspect of embodiments of the present application provides a computer-readable storage medium having stored therein instructions that, when executed on a computer, cause the computer to perform the method of the foregoing first aspect or any possible implementation manner of the first aspect, any possible implementation manner of the second aspect or the second aspect, any possible implementation manner of the third aspect or the third aspect, any possible implementation manner of the fourth aspect or the fourth aspect, or any possible implementation manner of the fifth aspect or the fifth aspect.
A sixteenth aspect of embodiments of the present application provides a computer program product, which when executed on a computer, causes the computer to perform the method in the foregoing first aspect or any possible implementation manner of the first aspect, the second aspect or any possible implementation manner of the second aspect, the third aspect or any possible implementation manner of the third aspect, the fourth aspect or any possible implementation manner of the fourth aspect, the fifth aspect or any possible implementation manner of the fifth aspect.
A seventeenth aspect of embodiments of the present application provides a communication system, including the first device provided in the eleventh or thirteenth aspect, and the second device of the twelfth or fourteenth aspect.
For technical effects brought by the sixth, eleventh, thirteenth, fifteenth, sixteenth, or seventeenth aspect or any one of possible implementation manners of these aspects, reference may be made to technical effects brought by the first aspect or different possible implementation manners of the first aspect, and details are not described here.
For technical effects brought by the seventh, twelfth, fourteenth, fifteenth, sixteenth, or seventeenth aspect or any one of possible implementation manners of these aspects, reference may be made to technical effects brought by the second aspect or different possible implementation manners of the second aspect, and details are not described here.
For technical effects brought by the eighth, eleventh, thirteenth, fifteenth, sixteenth, or seventeenth aspect or any one of possible implementation manners of these aspects, reference may be made to technical effects brought by the third aspect or different possible implementation manners of the third aspect, and details are not described here.
For technical effects brought by the ninth, twelfth, fourteenth, fifteenth, sixteenth, or seventeenth aspect or any one of possible implementation manners of these aspects, reference may be made to technical effects brought by the fourth aspect or different possible implementation manners of the fourth aspect, and details are not described here.
For technical effects brought by the tenth, twelfth, fourteenth, fifteenth, sixteenth, or seventeenth aspect or any one of possible implementation manners of these aspects, reference may be made to technical effects brought by the fifth aspect or different possible implementation manners of the fifth aspect, and details are not described here.
According to the foregoing technical solutions, the embodiments of the present application have the following advantages: the first device may determine, according to the first value and the weight coefficient, a second value to be used for playing the target signal on the second device, and send the second value to the second device. Because the value used by the second device to play the target signal is determined by the weight coefficient method, the second device after switching can automatically adjust its playing value to a value relative to that of the first device (i.e., the second value), which reduces the user's perception of the difference in playing values during switching, makes the switching to the second device comfortable, and improves user experience.
Drawings
FIG. 1 is a schematic structural diagram of an artificial intelligence framework;
FIG. 2 is a schematic diagram of an application environment according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a data processing method according to an embodiment of the present application;
FIG. 4 is another schematic flowchart of a data processing method according to an embodiment of the present application;
FIG. 5 is another schematic flowchart of a data processing method according to an embodiment of the present application;
FIG. 6 is another schematic flowchart of a data processing method according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a first device in an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a second device in an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a communication device in an embodiment of the present application;
FIG. 10 is a schematic structural diagram of another communication device in an embodiment of the present application.
Detailed Description
The embodiments of the present application provide a data processing method and related devices, which are used to determine appropriate playing parameters for a second device and improve user experience.
The embodiments of the present application are described below with reference to the accompanying drawings. The terminology used in the description of the embodiments is for the purpose of describing particular embodiments only and is not intended to limit the application. As can be known to those skilled in the art, with the development of technology and the emergence of new scenarios, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
The terms "first," "second," and the like in the description and in the claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and are merely descriptive of the various embodiments of the application and how objects of the same nature can be distinguished. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The general workflow of an artificial intelligence system is described first. Referring to FIG. 1, FIG. 1 is a schematic structural diagram of an artificial intelligence framework. The framework is explained below along two dimensions: the "intelligent information chain" (horizontal axis) and the "IT value chain" (vertical axis). The "intelligent information chain" reflects the series of processes from data acquisition to execution, for example, the general processes of intelligent information perception, intelligent information representation and formation, intelligent reasoning, intelligent decision-making, and intelligent execution and output. In this process, the data undergoes a refinement process of "data-information-knowledge-wisdom". The "IT value chain" reflects the value that artificial intelligence brings to the information technology industry, from the underlying infrastructure of artificial intelligence and information (technologies for providing and processing information) to the industrial ecology of the system.
(1) Infrastructure
The infrastructure provides computing capability support for the artificial intelligence system, enables communication with the outside world, and provides support through a base platform. The infrastructure communicates with the outside through sensors; computing capability is provided by intelligent chips (hardware acceleration chips such as CPUs, NPUs, GPUs, ASICs, and FPGAs); the base platform includes related platform assurance and support such as a distributed computing framework and networks, and may include cloud storage and computing, interconnection networks, and the like. For example, a sensor communicates with the outside to acquire data, and the data is provided, for computation, to intelligent chips in a distributed computing system provided by the base platform.
(2) Data
Data at the layer above the infrastructure indicates the data sources in the field of artificial intelligence. The data relates to graphics, images, speech, and text, and also relates to Internet of things data of traditional devices, including service data of existing systems and sensing data such as force, displacement, liquid level, temperature, and humidity.
(3) Data processing
Data processing typically includes data training, machine learning, deep learning, searching, reasoning, decision making, and the like.
Machine learning and deep learning can perform symbolic and formal intelligent information modeling, extraction, preprocessing, training, and the like on data.
Inference refers to the process of simulating human intelligent inference in a computer or intelligent system, in which a machine uses formalized information to think and solve problems according to an inference control strategy; typical functions are search and matching.
Decision-making refers to the process of making decisions after reasoning about intelligent information, and usually provides functions such as classification, ranking, and prediction.
(4) General capabilities
After the data processing described above, some general capabilities may further be formed based on the results of the data processing, for example, algorithms or a general system, such as translation, text analysis, computer vision processing, speech recognition, and image recognition.
(5) Intelligent products and industry applications
Intelligent products and industry applications refer to products and applications of artificial intelligence systems in various fields. They encapsulate the overall artificial intelligence solution, turning intelligent information decision-making into products and realizing practical applications. The application fields mainly include intelligent terminals, intelligent transportation, intelligent healthcare, autonomous driving, safe cities, and the like.
The application scenario of the solution provided in the present application is briefly described below.
Fig. 2 is a schematic diagram of an application scenario of the present application. As shown in fig. 2, a user first uses the first device 201 to play a target signal (audio, video, an image, music, or the like). If the user needs to play, on the second device 202, the target signal played by the first device 201 (or the user plans to play the target signal on the second device 202, that is, the target signal is not currently played on the first device 201), the data processing method provided in the present application enables the second device 202 to play the target signal using an appropriate value (for example, volume, brightness, or speed), thereby improving user experience.
Several example application scenarios to which fig. 2 applies are described below:
1. The user needs to transfer music played on a mobile phone to a Bluetooth headset for playing.
2. The user needs to transfer music, video, or images played on a mobile phone to a television, a computer, or a projection screen for playing.
3. The user needs to transfer music, text, or images played on a mobile phone to a smart watch for playing.
It should be understood that the example shown in fig. 2 should not impose any limitation on the context in which the embodiments of the present application are applied.
Most of the audio played by audio playing devices comes from different audio providers, and different providers use different volume levels when recording audio. Therefore, even when a playing device is set to the same playing volume, there can be a very obvious loudness difference between audio from different sources. Users perform audio playing operations through terminal devices, and during audio playing a user will inevitably, in order not to disturb others in public places, switch audio played through a loudspeaker to audio played through an earphone. However, after the playback device is switched, the playing volume may become larger or smaller; especially if the volume suddenly becomes larger, the user's listening experience is severely degraded, and the user may even be startled.
To solve the above problem, the present application provides a data processing method that can meet the user's requirement for multimedia audio playing on a mobile phone device, for example, music, video, and calls. When the playback device is switched during audio playing through Bluetooth, one-touch transfer, or App screen projection, the volume of the switched-to playback device can be automatically adjusted to a volume relative to that of the mobile phone device, reducing the user's perception of the loudness difference during switching and making the switching of the playback device comfortable.
In the embodiments of the present application, the first device is described only by using a mobile phone as an example. It can be understood that the first device and the second device may be terminal devices (e.g., mobile phones), or other devices capable of displaying images, displaying text, or playing music or videos, such as wearable devices and personal digital assistants (PDAs). The first device is not limited here in this embodiment of the present application.
The second device in this embodiment of the present application may be a device having functions of displaying images, displaying text, or playing music or videos, such as a headset, a computer, a television, a speaker, a projection screen, or a watch. The second device is not limited in this embodiment.
Illustratively, when the second device is a headset, a speaker, or a display screen in a smart car, the first device and the second device may transmit data through the Institute of Electrical and Electronics Engineers (IEEE) 802.11 protocol. When the second device is a projection screen, the first device and the second device may transmit data through the hypertext transfer protocol (HTTP). When the second device is a projection screen, a display screen in a smart car, a tablet computer, or a computer, the first device and the second device may transmit data through WiFi-Direct. The protocol or manner used by the first device and the second device to transmit data is not limited here.
The data processing method provided in the present application can be divided into several cases according to the content of the data sent by the first device to the second device:
Case 1: the first device sends the second value to the second device.
A data processing method provided in the present application is described below with reference to fig. 3. As shown in fig. 3, a flow of the data processing method provided in the present application may include steps 301 to 311. The individual steps in the method are explained in detail below with reference to fig. 3.
301. The first device obtains the first value and the device type of the second device.
The first device first obtains a first value, where the first value may be a value used by the first device to play a target signal on the first device, or a preset value to be used by the first device to play the target signal (i.e., the first value).
It can be understood that whether the target signal has already been played on the first device is not limited, and the first value may be a value to be used for playing the target signal on the first device.
The target signal in the embodiment of the present application may be music, lyrics of music, video, image, text, or the like, and is not limited herein.
The first value in this embodiment of the present application may be a volume value for playing music or a video, a brightness value for playing music or a video, or a playing speed value for music lyrics, a novel, or other text. It can be understood that, in practical applications, the first value may also be another value, for example, a scaling value of a video or a picture, which is not limited here.
In this embodiment of the present application, the first device may acquire the device type of the second device in several ways: the second device may send its device type to the first device, or the device type of the second device may be acquired from a third device (a device other than the first device and the second device). It can be understood that, in practical applications, the first device may also acquire the device type of the second device in other manners, for example, by scanning the second device, which is not limited here.
The device type of the second device in this embodiment may be a headset, a computer, a television, a speaker, a projection screen (or a screen or display in a smart car), a watch, or the like. Multiple second devices may be distinguished in the form of identifiers or numbers, for example, 1 and 2, or 0001 and 0002. The specific representation of the device type is not limited here.
Illustratively, when the first device and the second device are connected through Digital Living Network Alliance (DLNA) or Miracast, a type description field may be preset in the second device, and the mobile phone device obtains the device type of the second device by reading the description file of each second device in the network.
For example, if the mobile phone (i.e., the first device) switches the playback device while in the multimedia playing state, the device type of the second device is obtained; for example, a Bluetooth connection may return the type value of the second device's device type through getBluetoothClass. Optionally, whether the device type is recorded in the first mapping table may further be determined according to this value.
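To make this device-type check concrete, the following Python sketch maps Bluetooth major device class values to the device-type identifiers used later in Table 1. The class constants follow the values documented for Android's BluetoothClass.Device.Major, but the mapping, identifiers, and function name are assumptions made for illustration only.

```python
from typing import Optional

# Major device class values as documented for Android's
# BluetoothClass.Device.Major (treat these as assumptions to verify).
AUDIO_VIDEO = 0x0400  # headsets, speakers, TVs
WEARABLE = 0x0700     # watches and other wearables

# Hypothetical mapping from major class to the device-type field of Table 1.
MAJOR_CLASS_TO_DEVICE_TYPE = {
    AUDIO_VIDEO: "0001",  # treated as a headset in this sketch
    WEARABLE: "0003",     # watch
}

def device_type_from_bluetooth(major_class: int) -> Optional[str]:
    """Return the mapping-table device type, or None for an unrecorded type."""
    return MAJOR_CLASS_TO_DEVICE_TYPE.get(major_class)
```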
302. The first device determines a weight coefficient of the second device.
In this embodiment of the present application, the first device may determine the weight coefficient of the second device in multiple ways. The following describes, merely as an example, the first device determining the weight coefficient of the second device according to a first mapping table. It can be understood that the first device may also determine the weight coefficient of the second device according to a preset rule, or according to an indication of a third device, which is not limited here.
The weight coefficient in this embodiment of the present application is equivalent to the weight ratio of volume between the first device and the second device at the same hearing comfort level.
Illustratively, the first device stores a first mapping table, which is used to represent the association relationship between the device type of the second device and the weight coefficient.
For ease of understanding, refer to the example in Table 1:
TABLE 1
Identifier   Device type                 Volume weight   Brightness weight   Speed weight   Scaling weight
1            0000 (projection screen)    0.7             0.8                 0.9            1.2
2            0001 (headset)              1.1             -                   0.7            -
3            0002 (speaker)              1.5             -                   0.8            -
4            0003 (watch)                0.8             1.3                 0.7            1.5
The device types and weights in Table 1 are only examples, and the specific representation of the device types and the values of the weights are not limited here.
For example, when the first value is a volume value for playing music or a video, the weight coefficient of the second device may be the volume weight. When the first value is a display brightness value, the weight coefficient of the second device may be the brightness weight. When the first value is a speed value for playing music lyrics or text, the weight coefficient of the second device may be the speed weight. When the first value is a scaling value of a video or picture, the weight coefficient of the second device may be the scaling weight. It can be understood that the weight coefficient of the second device is not limited to the above examples and may take other forms, which are not limited here.
The first mapping table in this embodiment of the present application may be preset manually, or may be obtained through surveys or experiments; the first mapping table may also be a second mapping table obtained by pre-training on the user's habit of adjusting values relative to the first value. The manner of obtaining the first mapping table is not limited here.
303. The first device determines a second value based on the first value and the weight coefficient.
The first device obtains a weight coefficient of the second device according to the first mapping table, and determines a second value according to the first value and the weight coefficient. Specifically, the first device may calculate a product of the first numerical value and the weight coefficient to obtain the second numerical value.
For example, when the device type of the second device is 0001, assume that the user needs to transfer the music (i.e., the target signal) played on the mobile phone (i.e., the first device) to the headset (i.e., the second device) for playing, and the volume value of the music played on the mobile phone is 85 (i.e., the first value is 85). Then the volume used by the headset (i.e., the second value) is 93.5 (that is, 85 multiplied by the weight 1.1 is 93.5).
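As a rough illustration of steps 301 to 303, the following Python sketch performs the table lookup and multiplication. The table contents follow Table 1 above, while the data structure and function name are assumptions made for this sketch.

```python
# First mapping table: device type -> weight coefficient per kind of value.
FIRST_MAPPING_TABLE = {
    "0000": {"volume": 0.7, "brightness": 0.8, "speed": 0.9, "scaling": 1.2},  # projection screen
    "0001": {"volume": 1.1, "speed": 0.7},                                     # headset
    "0002": {"volume": 1.5, "speed": 0.8},                                     # speaker
    "0003": {"volume": 0.8, "brightness": 1.3, "speed": 0.7, "scaling": 1.5},  # watch
}

def second_value(first_value: float, device_type: str, kind: str) -> float:
    """Step 303: multiply the first value by the weight coefficient."""
    weights = FIRST_MAPPING_TABLE.get(device_type)
    if weights is None or kind not in weights:
        # Device type not recorded in the table: play with the same value
        # as the first device (see the handling of invalid types below).
        return first_value
    return first_value * weights[kind]

print(round(second_value(85, "0001", "volume"), 2))  # 93.5, the headset example
```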
304. The first device sends the second value to the second device. Correspondingly, the second device receives the second value sent by the first device.
After determining the second value, the first device sends the second value to the second device. After receiving the second value, the second device may play the target signal using the second value.
Illustratively, continuing the above example, after the mobile phone determines that the volume for the headset to play the music is 93.5, the mobile phone sends the volume value 93.5 to the headset, or the mobile phone sets the headset to play the music with a volume value of 93.5.
Illustratively, when the first value and the second value are volume values, the first device transmits the target signal and the second value to the second device through the Bluetooth, DLNA, or Miracast protocol, and the second device plays the target signal using the second value. For DLNA transmission, available second devices in the local area network are found through the root device, and transmission is performed through HTTP POST with body parameters carrying the volume and the URL. For Miracast, a nearby Miracast device is found through WiFi-Direct, and data is transmitted through the Real Time Streaming Protocol (RTSP).
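As a loose illustration of the DLNA-style handoff just described, the sketch below posts the computed volume and the media URL to a playback device over HTTP. The endpoint address and payload fields are hypothetical; a real DLNA/UPnP exchange uses SOAP control actions discovered from the device description, which is omitted here.

```python
import json
import urllib.request

# Hypothetical control endpoint of the playback device (second device).
PLAYBACK_DEVICE_URL = "http://192.168.1.20:8080/play"

def send_to_playback_device(media_url: str, volume: float) -> None:
    # Body parameters carry the volume and the URL, as described above.
    body = json.dumps({"url": media_url, "volume": volume}).encode("utf-8")
    request = urllib.request.Request(
        PLAYBACK_DEVICE_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # the reply is ignored in this sketch

send_to_playback_device("http://example.com/song.mp3", 93.5)
```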
305. The second device obtains a third value. This step is optional.
After the second device plays the target signal using the second value, the user may adjust the value the second device uses to play the target signal; the adjusted value is the third value.
Optionally, after the second device obtains the third value, the second device plays the target signal using the third value.
306. The second device sends the third value to the first device; correspondingly, the first device receives the third value sent by the second device. This step is optional.
After obtaining the third value input by the user, the second device may send the third value to the first device, so that the first device updates the first mapping table according to the third value.
It should be understood that the foregoing ways for the first device to obtain the third value are only examples; in practical applications, the first device may also obtain the third value in other ways, which is not limited here.
307. The first device updates the first mapping table according to the third value. This step is optional.
After obtaining the third value, the first device obtains a new weight coefficient by calculating the ratio of the third value to the first value, and updates the first mapping table with the new weight coefficient to obtain a second mapping table.
Illustratively, the first value is 50, the initial weight coefficient is 1.1, and the second value is 55. When the second device plays the target signal using the second value, the user adjusts 55 to 60, and the second device sends 60 to the first device; the first device then calculates a new weight coefficient of 1.2 (i.e., 60 divided by 50). The first device replaces the previous weight coefficient of the second device with the new one, that is, the weight coefficient 1.1 of the second device in the first mapping table is updated to 1.2, to obtain the second mapping table (in which the weight coefficient of the second device is 1.2).
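The update in step 307 can be pictured with the following self-contained Python sketch; the table layout and function name are, again, assumptions made for illustration.

```python
def update_mapping_table(table, device_type, kind, first_value, third_value):
    """Step 307: replace the old weight with the ratio third_value / first_value."""
    new_weight = third_value / first_value
    # Copy the table so the first mapping table is preserved and a
    # second mapping table is returned.
    updated = {dt: dict(weights) for dt, weights in table.items()}
    updated.setdefault(device_type, {})[kind] = new_weight
    return updated

first_table = {"0001": {"volume": 1.1}}  # headset row, as in Table 1
second_table = update_mapping_table(first_table, "0001", "volume", 50, 60)
print(second_table["0001"]["volume"])    # 1.2, as in the example above
```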
308. The first device obtains a target feature. This step is optional.
The first device obtains a target feature, where the target feature includes at least one of the first value used by the first device to play the target signal, the device type of the second device, the playing location of the second device, the system time of the second device, and the third value adjusted by the user.
Optionally, the target feature may further include a user profile of the user who inputs the third value, so that the first device can provide different fourth values for different users of the first device, thereby more accurately capturing the usage habits of different users.
309. The first device trains the to-be-trained model according to the target feature to obtain a prediction weight model. This step is optional.
Each time the user adjusts the value used by the second device to play the target signal, the second device sends the resulting third value to the first device, so that the first device knows the user's adjustment each time.
That the first device trains the to-be-trained model according to the target feature is equivalent to recording, each time the user adjusts the second value, target features such as the adjusted value (i.e., the third value), the device type, the current time, and the current location, and using these target features as training samples for the to-be-trained model. The model learns the user's usage habits and predicts the weight coefficient that best matches the current user scenario, yielding a prediction weight model that can subsequently provide values conforming to the user's usage habits.
The to-be-trained model in this embodiment of the present application may be a machine learning model such as a LightGBM or SVM model, or a neural network model such as a CNN.
For example, suppose the current time is late at night and the video (i.e., the target signal) on the mobile phone (i.e., the first device) is transferred to the projection screen (i.e., the second device) for playing. If the projection screen plays the video at the second value, the user may worry about disturbing others and turn down the volume of the video played on the projection screen (i.e., the user adjusts the second value to a third value). After the user has adjusted several times, the adjusted values and the current time (late at night) are recorded and used for training. Later, when the user plays a video on the projection screen late at night, the first device can select a volume value suitable for the user based on the previous training, thereby reducing the impact on others.
By training the to-be-trained model, the first device can make the weight coefficient better conform to the user's habits, and even to the usage habits and requirements of different users in different scenarios. In an embodiment where no user adjustment has occurred yet, the second value may be obtained according to the preset first mapping table. If the user inputs third values multiple times to adjust the second value, the second device may record the multiple third values and send them to the first device, and the first device may update the weight model according to them. That is, the first device may obtain the adjusted value (i.e., the third value) input by the user each time, and update the weight coefficient according to the third value and the first value. In other words, the first device uses the user's adjustments as training samples, learns the user's usage habits, and then provides a new weight coefficient that better conforms to those habits. In addition, when the user's adjustment is recorded, the location of the second device, the system time, and the user profile at the time of input may also be recorded, so that the newly trained weight coefficient better conforms to the user's habits and meets different requirements in different scenarios.
310. The first device obtains a fourth value according to the first value and the prediction weight model. This step is optional.
The first device may input the first value into the prediction weight model to obtain an updated weight coefficient; specifically, the product of the first value and the updated weight coefficient may be calculated to obtain a fourth value.
It is to be understood that the prediction weight model may also directly output the fourth value, which is not limited herein.
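One way to picture steps 308 to 310 is the sketch below, which trains a LightGBM regressor to predict the weight coefficient from contextual features and then derives the fourth value. The feature encoding, the toy samples, and the choice to regress on the ratio of the third value to the first value are illustrative assumptions, not prescribed by this application.

```python
import lightgbm as lgb
import numpy as np

# Hypothetical training samples: [device_type_id, hour_of_day, location_id],
# labeled with the weight the user effectively chose (third value / first
# value). A real model would be trained on many recorded adjustments, not
# four toy rows.
features = np.array([
    [1, 23, 0],   # headset, late night, at home
    [1, 9,  1],   # headset, morning, commuting
    [0, 22, 0],   # projection screen, evening, at home
    [0, 23, 0],   # projection screen, late night, at home
])
labels = np.array([0.9, 1.2, 0.8, 0.6])

model = lgb.LGBMRegressor(n_estimators=50, min_child_samples=1)
model.fit(features, labels)

def fourth_value(first_value, device_type_id, hour, location_id):
    # Step 310: the predicted weight coefficient times the first value.
    predicted = model.predict(np.array([[device_type_id, hour, location_id]]))[0]
    return first_value * predicted

# Projection screen late at night; with enough real data the prediction
# would reflect the user's habit of turning the volume down in this scene.
print(fourth_value(50, 0, 23, 0))
```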
311. The first device sends the fourth value to the second device. This step is optional.
After obtaining the fourth value according to the prediction weight model, the first device sends the fourth value to the second device, so that the second device plays the target signal using the fourth value; a value conforming to the user's habits is thus obtained without requiring user adjustment.
An embodiment of the present application may include all or only some of the steps shown in fig. 3. For example, one implementation may include steps 301 to 304; another may include steps 301 to 305; another may include steps 301 to 307; another may include steps 301 to 310; and another may include steps 301 to 311. The details are not limited here.
In this embodiment, the first device may determine, according to the first value and the weight coefficient, the second value used for playing the target signal on the second device. Because the value used by the second device to play the target signal is determined by the weight coefficient method, the volume of the switched-to second device is automatically adjusted to a volume relative to that of the first device, reducing the user's perception of the loudness difference during switching, making the switching of the second device comfortable, and improving user experience. Furthermore, the first device may update the first mapping table according to the third value that the user used to adjust the second value, so that the weight coefficient of the second device calculated next time better fits the user's habits. In addition, the first device may also obtain, by training the to-be-trained model, a fourth value suited to a specific scenario, thereby meeting the requirements of that scenario.
Case 2: the first device sends the first value and the weight coefficient to the second device.
Referring to fig. 4, another flow of the data processing method provided by the present application may include steps 401 to 411. The individual steps in the method are explained in detail below with reference to fig. 4.
401. The first device obtains a first value.
402. The first device determines a weight coefficient of the second device.
Steps 401 and 402 in this embodiment are similar to steps 301 and 302 in the embodiment shown in fig. 3, and are not repeated here.
403. The first device sends the first value and the weight coefficient to the second device. Correspondingly, the second device receives the first value and the weight coefficient sent by the first device.
After the first device obtains the first value and determines the weight coefficient of the second device, the first device sends the first value and the weight coefficient to the second device.
404. The second device determines a second value based on the first value and the weight coefficient.
After the second device obtains the first value and the weight coefficient of the second device, the second device may determine the second value according to the first value and the weight coefficient. Specifically, the product of the first value and the weight coefficient may be calculated to obtain the second value.
Illustratively, when the first value is 85 and the weight coefficient of the second device is 1.1, the second value of the second device is 93.5 (that is, 85 multiplied by 1.1 is 93.5).
Optionally, after determining the second value, the second device plays the target signal using the second value.
405. The second device obtains a third value. This step is optional.
406. The second device sends the third value to the first device. Correspondingly, the first device receives the third value sent by the second device. This step is optional.
407. The first device updates the first mapping table according to the third value. This step is optional.
408. The first device obtains a target feature. This step is optional.
409. The first device trains the to-be-trained model according to the target feature to obtain a prediction weight model. This step is optional.
410. The first device obtains a fourth value according to the first value and the prediction weight model. This step is optional.
411. The first device sends the fourth value to the second device. This step is optional.
Steps 405 to 411 in this embodiment are similar to steps 305 to 311 in the embodiment shown in fig. 3, and are not repeated here.
An embodiment of the present application may include all or only some of the steps shown in fig. 4. For example, one implementation may include steps 401 to 404; another may include steps 401 to 405; another may include steps 401 to 407; another may include steps 401 to 410; and another may include steps 401 to 411. The details are not limited here.
In this embodiment of the present application, the second device may determine, according to the first value and the weight coefficient, the second value used for playing the target signal on the second device. Because the value used by the second device to play the target signal is determined by the weight coefficient method, the volume of the switched-to second device is automatically adjusted to a volume relative to that of the first device, reducing the user's perception of the loudness difference during switching, making the switching of the second device comfortable, and improving user experience. Furthermore, the first device may update the first mapping table according to the third value that the user used to adjust the second value, so that the weight coefficient of the second device calculated next time better fits the user's habits. In addition, the first device may also obtain, by training the to-be-trained model, a fourth value suited to a specific scenario, thereby meeting the requirements of that scenario.
Case 3: the first device sends the first value to the second device.
Referring to fig. 5, another flow of the data processing method provided by the present application may include steps 501 to 508. The individual steps in the method are explained in detail below with reference to fig. 5.
501. The first device sends the first value to the second device. Correspondingly, the second device receives the first value sent by the first device.
The manner in which the first device obtains the first value is similar to the manner in which the first device obtains the first value in step 301 in the embodiment shown in fig. 3, and is not described herein again.
After the first device obtains the first value, the first device sends the first value to the second device.
Illustratively, the first device is a mobile phone, the second device is a watch, the target signal is text (e.g., a novel, news, or lyrics), and the first value is the playing speed value of the text on the mobile phone; assume the first value is 2x speed or a speed value of 120.
502. The second device determines a weight coefficient of the second device.
The second device may pre-store a first mapping table, where a description of the first mapping table is similar to that of the first mapping table in the embodiment shown in fig. 3, and is not repeated here.
The second device may determine a weight coefficient of the second device according to the first mapping table.
Optionally, the second device determines the weight coefficient of the second device according to the device type of the second device and the first mapping table.
Illustratively, as shown in Table 1 and continuing the above example, the device type of the second device is 0003, that is, the second device is a watch, and its speed weight is 0.7.
503. The second device determines a second value based on the first value and the weight coefficient.
After the second device obtains the first value and determines the weight coefficient, the second device may obtain a second value according to the first value and the weight coefficient. Specifically, the second device may calculate a product of the first numerical value and the weight coefficient to obtain a second numerical value.
Illustratively, continuing the above example, if the first value is 2x speed or a speed value of 120 and the weight coefficient of the second device is 0.7, the second device calculates the second value as 1.4x speed (2x speed multiplied by 0.7) or 84 (120 multiplied by 0.7).
Optionally, the second device plays the target signal using the second value.
Illustratively, the watch plays the text at 1.4x speed or a speed value of 84.
504. The second device obtains a third value. This step is optional.
505. The second device updates the first mapping table according to the third value. This step is optional.
Illustratively, continuing the above example, the user adjusts the playing speed of the text to 1.2x speed or a speed value of 72 (i.e., the third value is 1.2 or 72), and the second device obtains a new weight coefficient of 0.6 by calculating the quotient of the third value and the first value (i.e., 1.2 divided by 2 yields 0.6, and 72 divided by 120 yields 0.6). The second device replaces the previous weight coefficient with the new one, that is, the weight coefficient 0.7 of the second device in the first mapping table is updated to 0.6, to obtain a second mapping table (in which the weight coefficient of the second device is 0.6).
506. The second device obtains the target feature. This step is optional.
The second device obtains a target feature, where the target feature includes at least one of the first value used by the first device to play the target signal, the device type of the second device, the playing location of the second device, the system time of the second device, and the third value adjusted by the user.
Optionally, the target feature may further include a user profile of the user who inputs the third value, so that the second device can provide different fourth values for different users of the second device, thereby more accurately capturing the usage habits of different users.
507. The second device trains the to-be-trained model according to the target feature to obtain a prediction weight model. This step is optional.
Each time the user adjusts the value used by the second device to play the target signal, the second device records the resulting third value. Optionally, the second device also records the system time, the playing location, the user profile, and the like at the time of the adjustment.
That the second device trains the to-be-trained model according to the target feature is equivalent to recording, each time the user adjusts the second value, target features such as the adjusted value (i.e., the third value), the device type, the current time, and the current location, and using these target features as training samples for the to-be-trained model. The model learns the user's usage habits and predicts the weight coefficient that best matches the current user scenario, yielding a prediction weight model that can subsequently provide values conforming to the user's usage habits.
The to-be-trained model in this embodiment of the present application may be a machine learning model such as a LightGBM or SVM model, or a neural network model such as a CNN.
For example, suppose the current time is late at night and the video (i.e., the target signal) on the mobile phone (i.e., the first device) is transferred to the projection screen (i.e., the second device) for playing. If the projection screen plays the video at the second value, the user may worry about disturbing others and turn down the volume of the video played on the projection screen (i.e., the user adjusts the second value to a third value). After the user has adjusted several times, the second device records the adjusted values and the current time (late at night) and uses them for training. Later, when the user plays a video on the projection screen late at night, the second device can select a volume value suitable for the user based on the previous training, thereby reducing the impact on others.
By training the to-be-trained model, the second device can make the weight coefficient better conform to the user's habits, and even to the usage habits and requirements of different users in different scenarios. In an embodiment where no user adjustment has occurred yet, the second value may be obtained according to the preset first mapping table. If the user inputs third values multiple times to adjust the second value, the second device may record the multiple third values and update the weight model according to them. That is, the second device may obtain the adjusted value (i.e., the third value) input by the user each time, and update the weight coefficient according to the third value and the first value. In other words, the second device uses the user's adjustments as training samples, learns the user's usage habits, and then provides a new weight coefficient that better conforms to those habits. In addition, when the user's adjustment is recorded, the location of the second device, the system time, and the user profile at the time of input may also be recorded, so that the newly trained weight coefficient better conforms to the user's habits and meets different requirements in different scenarios.
508. The second device obtains a fourth value according to the first value and the prediction weight model. This step is optional.
The second device may input the first value into the prediction weight model to obtain an updated weight coefficient; specifically, the product of the first value and the updated weight coefficient may be calculated to obtain a fourth value.
It is to be understood that the prediction weight model may also directly output the fourth value, which is not limited herein.
Optionally, the second device plays the target signal using a fourth value.
An embodiment of the present application may include all or only some of the steps shown in fig. 5. For example, one implementation may include steps 501 to 503; another may include steps 501 to 504; another may include steps 501 to 505; another may include steps 501 to 507; and another may include steps 501 to 508. The details are not limited here.
In this embodiment of the present application, the second device may determine, according to the first value and the weight coefficient, the second value used for playing the target signal on the second device. Because the value used by the second device to play the target signal is determined by the weight coefficient method, the volume of the switched-to second device is automatically adjusted to a volume relative to that of the first device, reducing the user's perception of the loudness difference during switching, making the switching of the second device comfortable, and improving user experience. Furthermore, the second device may update the first mapping table according to the third value that the user used to adjust the second value, so that the weight coefficient of the second device calculated next time better fits the user's habits. In addition, the second device may also obtain, by training the to-be-trained model, a fourth value suited to a specific scenario, thereby meeting the requirements of that scenario.
Referring to fig. 6, fig. 6 shows another flow of the data processing method provided in the present application. This flow is described below with reference to fig. 6.
In this embodiment of the present application, the data processing method shown in fig. 6 is described by taking the first device as a mobile phone device and an audio signal as the target signal as an example.
First, the mobile phone device assigns initial values to a preset table of relative volumes between devices calculated through human factors engineering. Human factors engineering studies the overall design of human-machine systems; the initial values can be obtained through various research methods, such as surveys, experiments, and graphic models, which measure the volume weight ratios between devices at the same hearing comfort level and generate the first mapping table.
When the mobile phone device plays multimedia, the Android system can determine the current audio playing state of the mobile phone through an interface in AudioManager. If the mobile phone switches the playback device while in the multimedia playing state, the device type of the playback device (i.e., the second device) is obtained; for example, a Bluetooth connection can return the type value of the playback device's type through getBluetoothClass, and whether the device type is a valid device type for this solution is then determined according to this value. For DLNA and Miracast connections, a type description field can be preset in the corresponding device, and the mobile phone obtains the device type by reading the description file of each device in the network. A valid device type proceeds to the weight table query of the next step, while an invalid device type (i.e., a device not recorded in the first mapping table) plays the audio signal with the same volume value as the mobile phone device.
The mapping table calculation module in the mobile phone first obtains the current media playing volume of the mobile phone device (i.e., the first value), passes it, together with the valid device type obtained through the connection protocol, into the module, obtains the corresponding weight value (i.e., the weight coefficient) according to the device type, and calculates the volume value of the playback device (i.e., the second value) by combining the weight with the current volume of the mobile phone.
Next, the media audio data stream (i.e., the audio signal) and the calculated playback volume are transmitted to the playback device through the Bluetooth, DLNA, or Miracast protocol, and media playing is performed at the calculated volume value. Bluetooth transmission converts the volume and audio data into digital signals and sends the decoded data stream to the target device. DLNA transmission finds available devices in the local area network through the root device and performs HTTP POST transmission with body parameters carrying the volume and the URL. Miracast finds a nearby Miracast device through WiFi-Direct and transmits data through the RTSP real-time streaming protocol.
When the playback device plays media at the calculated relative volume, if the user is dissatisfied with the current volume and adjusts it manually, the user-adjusted volume (i.e., the third value) is reported back to the mobile phone through volume-adjustment event logging, so that the mapping weight table is refreshed with the volume the user perceives as optimal (i.e., the first mapping table is updated to obtain the second mapping table).
Corresponding to the methods provided in the foregoing method embodiments, the embodiments of the present application further provide corresponding apparatuses, each including modules for executing the foregoing embodiments. A module may be software, hardware, or a combination of software and hardware.
Referring to fig. 7, fig. 7 shows an embodiment of the first device in the embodiments of the present application; this embodiment may also be an embodiment of a component (e.g., a processor, a chip, or a system of chips) of the first device.
In one possible implementation, the first device 700 includes:
a transceiving unit 701, configured to obtain a first value used for playing a target signal on the first device and a device type of a second device;
a processing unit 702, configured to determine a weight coefficient of the second device;
the processing unit 702 is further configured to determine, according to the weight coefficient and the first value, a second value to be used for playing the target signal on the second device;
the transceiving unit 701 is further configured to send the second value to the second device, so that the second device plays the target signal using the second value.
Optionally, the processing unit 702 is specifically configured to determine a weight coefficient of the second device according to a first mapping table, where the first mapping table is used to represent an association relationship between a device type of the second device and the weight coefficient.
Optionally, the transceiving unit 701 is further configured to receive a third value sent by the second device, where the third value is obtained by adjusting the second value;
the processing unit 702 is further configured to update the first mapping table according to the third value to obtain a second mapping table, where a weight coefficient of the second device in the second mapping table is different from a weight coefficient of the second device in the first mapping table.
Optionally, the transceiving unit 701 is further configured to obtain a target feature, where the target feature includes at least one of the first value, a playing location of the second device, a system time of the second device, and the third value;
the processing unit 702 is further configured to train the model to be trained according to the target feature, so as to obtain a prediction weight model.
Optionally, the processing unit 702 is further configured to obtain a fourth value according to the first value and the prediction weight model;
the transceiving unit 701 is further configured to send the fourth value to the second device.
Optionally, the target signal includes an audio signal, the first value includes a volume value used for playing the audio signal on the first device, and the second value includes a volume value used for playing the audio signal on the second device; or the target signal includes a video signal, the first value includes a volume value or a brightness value used for playing the video signal on the first device, and the second value includes a volume value or a brightness value used for playing the video signal on the second device; or the target signal includes an image, the first value includes a brightness value or a scaling ratio used for playing the image on the first device, and the second value includes a brightness value or a scaling ratio used for playing the image on the second device; or the target signal includes text, the first value includes a play speed value used for playing the text on the first device, and the second value includes a play speed value used for playing the text on the second device.
In this embodiment, operations performed by each unit in the first device are similar to those performed by the first device in the embodiment shown in fig. 3 to 6, and are not described again here.
In this embodiment, the processing unit 702 may determine, according to the first value and the weight coefficient, a second value used for playing the target signal on the second device. Because the value used by the second device to play the target signal is determined by the weight coefficient method, the volume of the second device after switching is automatically adjusted to match the relative volume of the first device, which reduces the user's perception of the loudness difference during switching, makes the switch to the second device comfortable, and improves the user experience. Further, the processing unit 702 may update the first mapping table according to the third value obtained when the user adjusts the second value, so that the weight coefficient calculated for the second device next time better matches the user's habits. In addition, the processing unit 702 may obtain a fourth value for a specific scenario by training the model to be trained, so as to meet the requirements of that scenario.
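To make the weight coefficient method concrete, the following is a minimal Python sketch of the processing performed by the first device 700; the table entries, default weight, and function name are illustrative assumptions rather than anything prescribed by this application.

```python
# Illustrative sketch only: the table entries, default weight, and function
# name are assumptions of this example, not part of the application.
first_mapping_table = {
    "television": 1.5,  # assumed example entries
    "speaker": 1.2,
    "earphone": 0.6,
}

def second_value_for(first_value: float, device_type: str) -> float:
    """Look up the weight coefficient by device type and return the
    second value as the product of the first value and the weight."""
    weight = first_mapping_table.get(device_type, 1.0)  # assumed default
    return first_value * weight

# Example: audio playing at volume 40 on the first device is handed off
# to a television, which then plays at volume 40 * 1.5 = 60.
print(second_value_for(40, "television"))  # -> 60.0
```

The point of keeping the weight per device type is that the second value tracks the first value relatively, rather than being a fixed preset on the second device.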
In another possible implementation, the first device 700 includes:
a transceiver unit 701, configured to obtain a first value used by the first device to play a target signal, and a device type of a second device;
a processing unit 702, configured to determine a weight coefficient of the second device;
the transceiver unit 701 is further configured to send the first value and the weight coefficient to the second device, so that the second device determines its second value according to the first value and the weight coefficient.
Optionally, the processing unit 702 is specifically configured to determine the weight coefficient of the second device according to a first mapping table, where the first mapping table represents an association between the device type of the second device and the weight coefficient.
Optionally, the transceiver unit 701 is further configured to receive a third value sent by the second device, where the third value is obtained by adjusting the second value;
the processing unit 702 is further configured to update the first mapping table according to the third value to obtain a second mapping table, where the weight coefficient of the second device in the second mapping table is different from the weight coefficient of the second device in the first mapping table.
Optionally, the transceiver unit 701 is further configured to obtain a target feature, where the target feature includes at least one of the first value, a playing location of the second device, a system time of the second device, and the third value;
the processing unit 702 is further configured to train a model to be trained according to the target feature to obtain a prediction weight model.
Optionally, the processing unit 702 is further configured to obtain a fourth value according to the first value and the prediction weight model;
the transceiver unit 701 is further configured to send the fourth value to the second device.
Optionally, the target signal comprises an audio signal, the first value comprises a volume value used for playing the audio signal on the first device, and the second value comprises a volume value used for playing the audio signal on the second device; or the target signal comprises a video signal, the first value comprises a volume value or a brightness value used for playing the video signal on the first device, and the second value comprises a volume value or a brightness value used for playing the video signal on the second device; or the target signal comprises an image, the first value comprises a brightness value or a scaling ratio used for displaying the image on the first device, and the second value comprises a brightness value or a scaling ratio used for displaying the image on the second device; or the target signal comprises text, the first value comprises a playing speed value used for playing the text on the first device, and the second value comprises a playing speed value used for playing the text on the second device.
In this embodiment, operations performed by each unit in the first device are similar to those performed by the first device in the embodiment shown in fig. 3 to 6, and are not described again here.
In this embodiment, the transceiver unit 701 sends the first value and the weight coefficient to the second device, so that the second device can determine, according to the first value and the weight coefficient, the second value used for playing the target signal on the second device. The volume of the second device after switching is thus automatically adjusted to match the relative volume of the first device, which reduces the user's perception of the loudness difference during switching, makes the switch to the second device comfortable, and improves the user experience. Further, the processing unit 702 may update the first mapping table according to the third value obtained when the user adjusts the second value, so that the weight coefficient calculated for the second device next time better matches the user's habits. In addition, the processing unit 702 may obtain a fourth value for a specific scenario by training the model to be trained, so as to meet the requirements of that scenario.
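One plausible way to fold the third value back into the table is sketched below, under the assumption that the new weight is simply the ratio of the adjusted value to the first value; the application leaves the actual update rule open.

```python
# Illustrative sketch only: the update rule (adjusted value divided by the
# first value) is an assumption; the application does not fix a formula.
def update_mapping_table(table: dict, device_type: str,
                         first_value: float, third_value: float) -> dict:
    """Return a second mapping table in which the weight coefficient of
    the second device reflects the user's adjustment (the third value)."""
    second_table = dict(table)  # leave the first mapping table intact
    second_table[device_type] = third_value / first_value
    return second_table

# Example: the user turned the handed-off volume down from 60 to 45, so the
# stored weight for "television" becomes 45 / 40 = 1.125 for next time.
second_table = update_mapping_table({"television": 1.5}, "television", 40, 45)
```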
Referring to fig. 8, an embodiment of the second device in the embodiments of the present application is described below. The embodiment may also be an embodiment of a component of the second device (e.g., a processor, a chip, or a system-on-a-chip).
In one possible implementation, the second device 800 includes:
a transceiver unit 801, configured to receive a target signal;
the transceiver unit 801 is further configured to receive a second value sent by the first device, where the second value is obtained by the first device according to a first value, the device type of the second device, and a first mapping table; the first value is the value used by the first device to play the target signal, the first mapping table represents an association between the device type of the second device and a weight coefficient, and the weight coefficient is used to obtain the second value from the first value;
a processing unit 802, configured to play the target signal using the second value.
Optionally, the transceiver unit 801 is further configured to obtain a third value, where the third value is obtained by adjusting the second value;
the transceiver unit 801 is further configured to send the third value to the first device, so that the first device updates the first mapping table to obtain a second mapping table, where the weight coefficient of the second device in the second mapping table is different from the weight coefficient of the second device in the first mapping table.
Optionally, the target signal comprises an audio signal, the first value comprises a volume value used for playing the audio signal on the first device, and the second value comprises a volume value used for playing the audio signal on the second device; or the target signal comprises a video signal, the first value comprises a volume value or a brightness value used for playing the video signal on the first device, and the second value comprises a volume value or a brightness value used for playing the video signal on the second device; or the target signal comprises an image, the first value comprises a brightness value or a scaling ratio used for displaying the image on the first device, and the second value comprises a brightness value or a scaling ratio used for displaying the image on the second device; or the target signal comprises text, the first value comprises a playing speed value used for playing the text on the first device, and the second value comprises a playing speed value used for playing the text on the second device.
In this embodiment, operations performed by each unit in the second device are similar to those performed by the second device in the embodiment shown in fig. 3 to fig. 6, and are not described again here.
In this embodiment, the transceiver unit 801 may receive the second value sent by the first device. The second value is the value, determined by the first device using the weight coefficient method, that the second device uses to play the target signal; the volume of the second device after switching is thus automatically adjusted to match the relative volume of the first device, which reduces the user's perception of the loudness difference during switching, makes the switch to the second device comfortable, and improves the user experience. Further, the transceiver unit 801 may obtain the user's adjusted value and send it (i.e., the third value) to the first device, so that the first device updates the first mapping table according to the adjustment and the weight coefficient calculated for the second device next time better matches the user's habits. In addition, the transceiver unit 801 may receive a fourth value sent by the first device, where the fourth value is obtained by the first device by training the model to be trained, so as to meet the requirements of a specific scenario.
In another possible implementation, the second device 800 includes:
a transceiver unit 801, configured to receive a target signal;
the transceiver unit 801 is further configured to receive a first value and a weight coefficient sent by the first device, where the first value is the value used by the first device to play the target signal;
a processing unit 802, configured to determine a second value according to the first value and the weight coefficient;
the processing unit 802 is further configured to play the target signal using the second value.
Optionally, the processing unit 802 is specifically configured to calculate the product of the first value and the weight coefficient to obtain the second value.
Optionally, the transceiver unit 801 is further configured to obtain a third value, where the third value is obtained by adjusting the second value;
the processing unit 802 is further configured to play the target signal using the third value;
the transceiver unit 801 is further configured to send the third value to the first device, so that the first device uses the third value to update the weight coefficient of the second device in the first mapping table.
Optionally, the transceiver unit 801 is further configured to obtain a target feature, where the target feature includes at least one of a playing location of the second device, a system time of the second device, and the third value;
the processing unit 802 is further configured to train a model to be trained according to the target feature to obtain a prediction weight model.
Optionally, the processing unit 802 is further configured to obtain a fourth value according to the first value and the prediction weight model;
the processing unit 802 is further configured to play the target signal using the fourth value.
Optionally, the target signal comprises an audio signal, the first value comprises a volume value used for playing the audio signal on the first device, and the second value comprises a volume value used for playing the audio signal on the second device; or the target signal comprises a video signal, the first value comprises a volume value or a brightness value used for playing the video signal on the first device, and the second value comprises a volume value or a brightness value used for playing the video signal on the second device; or the target signal comprises an image, the first value comprises a brightness value or a scaling ratio used for displaying the image on the first device, and the second value comprises a brightness value or a scaling ratio used for displaying the image on the second device; or the target signal comprises text, the first value comprises a playing speed value used for playing the text on the first device, and the second value comprises a playing speed value used for playing the text on the second device.
In this embodiment, operations performed by each unit in the second device are similar to those performed by the second device in the embodiment shown in fig. 3 to fig. 6, and are not described again here.
In this embodiment, the transceiver unit 801 may receive the first value and the weight coefficient sent by the first device. The processing unit 802 may obtain the second value according to the first value and the weight coefficient, so that the volume of the second device after switching is automatically adjusted to match the relative volume of the first device, which reduces the user's perception of the loudness difference during switching, makes the switch to the second device comfortable, and improves the user experience. Further, the transceiver unit 801 may obtain the user's adjusted value and send it (i.e., the third value) to the first device, so that the first device updates the first mapping table according to the adjustment and the weight coefficient calculated for the second device next time better matches the user's habits. In addition, the processing unit 802 may obtain a fourth value by training the model to be trained, so as to meet the requirements of a specific scenario.
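The application does not specify the model to be trained; as one possibility, the sketch below fits a linear regression over the target features, using the weight implied by each user adjustment as the training label. The feature encoding, library choice, and label definition are all assumptions of this illustration.

```python
# Illustrative sketch only: the linear model, feature encoding, and label
# definition (third value / first value) are assumptions of this example.
from sklearn.linear_model import LinearRegression

# Each sample encodes the target features: playing location (as an integer
# code), hour of the second device's system time, and the first value.
X = [
    [0, 8, 40],   # living room, 8 a.m., first value 40
    [0, 22, 40],  # living room, 10 p.m., first value 40
    [1, 23, 30],  # bedroom, 11 p.m., first value 30
]
# Labels: the weight implied by each user adjustment (third / first value).
y = [1.5, 1.0, 0.8]

prediction_weight_model = LinearRegression().fit(X, y)
```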
In another possible implementation, the second device 800 includes:
a transceiver unit 801, configured to receive a target signal;
the transceiver unit 801 is further configured to receive a first value from the first device, where the first value is the value used by the first device to play the target signal;
a processing unit 802, configured to determine a weight coefficient;
the processing unit 802 is further configured to determine a second value according to the first value and the weight coefficient;
the processing unit 802 is further configured to play the target signal using the second value.
Optionally, the processing unit 802 is specifically configured to determine the weight coefficient according to a first mapping table, where the first mapping table represents an association between the device type of the second device and the weight coefficient.
Optionally, the transceiver unit 801 is further configured to obtain a third value, where the third value is obtained by adjusting the second value;
the processing unit 802 is further configured to play the target signal using the third value.
Optionally, the processing unit 802 is further configured to update the first mapping table using the third value to obtain a second mapping table, where the weight coefficient of the second device in the second mapping table is different from the weight coefficient of the second device in the first mapping table.
Optionally, the transceiver unit 801 is further configured to obtain a target feature, where the target feature includes at least one of a playing location of the second device, a system time of the second device, and the third value;
the processing unit 802 is further configured to train a model to be trained according to the target feature to obtain a prediction weight model.
Optionally, the processing unit 802 is further configured to obtain a fourth value according to the first value and the prediction weight model;
the processing unit 802 is further configured to play the target signal using the fourth value.
Optionally, the target signal comprises an audio signal, the first value comprises a volume value used for playing the audio signal on the first device, and the second value comprises a volume value used for playing the audio signal on the second device; or the target signal comprises a video signal, the first value comprises a volume value or a brightness value used for playing the video signal on the first device, and the second value comprises a volume value or a brightness value used for playing the video signal on the second device; or the target signal comprises an image, the first value comprises a brightness value or a scaling ratio used for displaying the image on the first device, and the second value comprises a brightness value or a scaling ratio used for displaying the image on the second device; or the target signal comprises text, the first value comprises a playing speed value used for playing the text on the first device, and the second value comprises a playing speed value used for playing the text on the second device.
In this embodiment, operations performed by each unit in the second device are similar to those performed by the second device in the embodiment shown in fig. 3 to fig. 6, and are not described again here.
In this embodiment, the transceiver unit 801 may receive the first value sent by the first device. The processing unit 802 may obtain the second value according to the first value and the weight coefficient, so that the volume of the second device after switching is automatically adjusted to match the relative volume of the first device, which reduces the user's perception of the loudness difference during switching, makes the switch to the second device comfortable, and improves the user experience. Further, the transceiver unit 801 may obtain the user's adjusted value, and the processing unit 802 may update the first mapping table according to this adjustment (i.e., the third value), so that the weight coefficient calculated for the second device next time better matches the user's habits. In addition, the processing unit 802 may obtain a fourth value by training the model to be trained, so as to meet the requirements of a specific scenario.
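Continuing the training sketch given earlier, obtaining the fourth value then amounts to predicting a weight for the current scenario and scaling the first value by it; the function name and feature encoding are again assumptions of this illustration.

```python
# Illustrative sketch only, continuing the training example above; the
# feature encoding and function name are assumptions.
def fourth_value_for(model, location: int, hour: int,
                     first_value: float) -> float:
    """Predict a scene-specific weight and apply it to the first value."""
    predicted_weight = model.predict([[location, hour, first_value]])[0]
    return first_value * predicted_weight

# Example: in the bedroom (code 1) at 11 p.m., the predicted weight is low,
# so the fourth value plays more quietly than the first value would suggest.
# fourth = fourth_value_for(prediction_weight_model, 1, 23, 40)
```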
Referring to fig. 9, an embodiment of the present application provides a communication device, which may be the first device or the second device. For ease of description, only the portion related to the embodiments of the present application is shown; for specific technical details that are not shown, refer to the method part of the embodiments of the present application. The communication device may be any terminal device, including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point of sale (POS) terminal, a vehicle-mounted computer, and the like. The following takes a mobile phone as an example.
Fig. 9 is a block diagram of a partial structure of a mobile phone serving as the communication device provided in an embodiment of the present application. Referring to fig. 9, the mobile phone includes: a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a display unit 940, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, a processor 980, and a power supply 990. Those skilled in the art will appreciate that the structure shown in fig. 9 does not limit the mobile phone, which may include more or fewer components than shown, combine some components, or arrange the components differently.
The following describes each component of the mobile phone in detail with reference to fig. 9:
The RF circuit 910 may be used to receive and send signals during information transmission and reception or during a call. In general, the RF circuit 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 920 may be used to store software programs and modules, and the processor 980 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 920. The memory 920 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data or a phonebook), and the like. Further, the memory 920 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
Optionally, the memory 920 may store a first mapping table.
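For illustration, one simple way the first mapping table could be kept in the memory 920 is as a small serialized association from device type to weight coefficient; the file name and format below are assumptions of this sketch, since the application only states that the memory may store the table.

```python
# Illustrative sketch only: the file name and JSON layout are assumptions;
# the application only states that the memory may store a first mapping table.
import json

def save_table(table: dict, path: str = "first_mapping_table.json") -> None:
    """Persist the first mapping table so it survives restarts."""
    with open(path, "w") as f:
        json.dump(table, f)

def load_table(path: str = "first_mapping_table.json") -> dict:
    """Load the stored first mapping table, or start empty if absent."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}
```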
The input unit 930 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 930 may include a touch panel 931 and other input devices 932. The touch panel 931, also referred to as a touch screen, may collect a touch operation performed by the user on or near it (for example, an operation performed on or near the touch panel 931 using a finger, a stylus, or any other suitable object or accessory) and drive a corresponding connection apparatus according to a preset program. Optionally, the touch panel 931 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch position of the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into touch point coordinates, and sends them to the processor 980, and it can also receive and execute commands sent by the processor 980. The touch panel 931 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. In addition to the touch panel 931, the input unit 930 may include other input devices 932. In particular, the other input devices 932 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, a joystick, and the like.
The display unit 940 may be used to display information input by the user or information provided to the user, as well as the various menus of the mobile phone. The display unit 940 may include a display panel 941; optionally, the display panel 941 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 931 may cover the display panel 941. When the touch panel 931 detects a touch operation on or near it, the touch panel 931 transmits the operation to the processor 980 to determine the type of the touch event, and the processor 980 then provides a corresponding visual output on the display panel 941 according to the type of the touch event. Although in fig. 9 the touch panel 931 and the display panel 941 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 931 and the display panel 941 may be integrated to implement these functions.
The handset may also include at least one sensor 950, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 941 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 941 and/or backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 960, the speaker 961, and the microphone 962 may provide an audio interface between the user and the mobile phone. The audio circuit 960 may convert received audio data into an electrical signal and transmit it to the speaker 961, which converts it into a sound signal for output; conversely, the microphone 962 converts a collected sound signal into an electrical signal, which the audio circuit 960 receives and converts into audio data. The audio data is then output to the processor 980 for processing, after which it may be sent, for example, to another mobile phone through the RF circuit 910, or output to the memory 920 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 970, the mobile phone can help the user receive and send e-mails, browse web pages, access streaming media, and the like, providing wireless broadband Internet access for the user. Although fig. 9 shows the WiFi module 970, it is not an essential component of the mobile phone and may be omitted as needed.
The processor 980 is the control center of the mobile phone. It connects the various parts of the entire phone through various interfaces and lines, and performs the phone's functions and processes data by running or executing the software programs and/or modules stored in the memory 920 and calling the data stored in the memory 920, thereby monitoring the mobile phone as a whole. Optionally, the processor 980 may include one or more processing units; preferably, the processor 980 may integrate an application processor, which mainly handles the operating system, user interfaces, and applications, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 980.
The mobile phone also includes a power supply 990 (e.g., a battery) for supplying power to the components. Preferably, the power supply is logically connected to the processor 980 through a power management system, so that charging, discharging, and power consumption are managed through the power management system.
Although not shown, the mobile phone may further include a camera, a Bluetooth module, and the like, which are not described here.
In this embodiment, the processor 980 included in the terminal device may perform the functions in the embodiments shown in fig. 3 to fig. 6, which are not described herein again.
Referring to fig. 10, an embodiment of the present application provides a possible schematic structural diagram of a communication device 1000 in the foregoing embodiments. The communication device may specifically be the first device or the second device in the foregoing embodiments, and may include, but is not limited to, a processor 1001, a communication port 1002, a memory 1003, and a bus 1004. In this embodiment, the processor 1001 is configured to control the operations of the communication device 1000.
Further, the processor 1001 may be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination of computing components, for example, a combination of one or more microprocessors, or a combination of a digital signal processor and a microprocessor.
Optionally, the memory 1003 may store the first mapping table.
It should be noted that the communication device shown in fig. 10 may be specifically configured to implement the functions of the steps performed by the first device or the second device in the method embodiments corresponding to fig. 3 to fig. 6 and to achieve the corresponding technical effects. For a specific implementation, refer to the descriptions in the method embodiments corresponding to fig. 3 to fig. 6, which are not repeated here.
An embodiment of the present application further provides a computer-readable storage medium storing one or more computer-executable instructions. When the computer-executable instructions are executed by a processor, the processor performs the method of the possible implementations of the communication device in the foregoing embodiments, where the communication device may specifically be the first device or the second device in the method embodiments corresponding to fig. 3 to fig. 6.
An embodiment of the present application further provides a computer program product storing one or more computer-executable instructions. When the computer program product is executed by a processor, the processor performs the method of the possible implementations of the communication device, where the communication device may specifically be the first device or the second device in the method embodiments corresponding to fig. 3 to fig. 6.
An embodiment of the present application further provides a chip system, where the chip system includes a processor configured to support a communication device in implementing the functions involved in the possible implementations of the communication device. In one possible design, the chip system may further include a memory that stores the program instructions and data necessary for the communication device. The chip system may consist of a chip, or may include a chip and other discrete devices, where the communication device may specifically be the first device or the second device in the method embodiments corresponding to fig. 3 to fig. 6.
An embodiment of the present application further provides a network system architecture, where the network system architecture includes the communication device, and the communication device may specifically be the first device and/or the second device in the method embodiments corresponding to fig. 3 to fig. 6.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application essentially, or the part of it that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods in the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Claims (32)

1. A data processing method, comprising:
a first device obtaining a first value used by the first device to play a target signal, and a device type of a second device;
the first device determining a weight coefficient of the second device;
the first device determining, according to the weight coefficient and the first value, a second value to be used by the second device to play the target signal;
the first device sending the second value to the second device, so that the second device plays the target signal using the second value.
2. The method of claim 1, wherein the first device determining the weight coefficient of the second device comprises:
the first device determining the weight coefficient of the second device according to a first mapping table, wherein the first mapping table represents an association between the device type of the second device and the weight coefficient.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
the first device receiving a third value sent by the second device, wherein the third value is obtained by adjusting the second value;
the first device updating the first mapping table according to the third value to obtain a second mapping table, wherein the weight coefficient of the second device in the second mapping table is different from the weight coefficient of the second device in the first mapping table.
4. The method of claim 3, wherein after the first device receives the third value sent by the second device, the method further comprises:
the first device obtaining a target feature, wherein the target feature comprises at least one of the first value, a playing location of the second device, a system time of the second device, and the third value;
the first device training a model to be trained according to the target feature to obtain a prediction weight model.
5. The method of claim 4, further comprising:
the first device obtaining a fourth value according to the first value and the prediction weight model;
the first device sending the fourth value to the second device.
6. The method according to any one of claims 1 to 5, wherein
the target signal comprises an audio signal, the first value comprises a volume value used for playing the audio signal on the first device, and the second value comprises a volume value used for playing the audio signal on the second device; or
the target signal comprises a video signal, the first value comprises a volume value or a brightness value used for playing the video signal on the first device, and the second value comprises a volume value or a brightness value used for playing the video signal on the second device; or
the target signal comprises an image, the first value comprises a brightness value or a scaling ratio used for displaying the image on the first device, and the second value comprises a brightness value or a scaling ratio used for displaying the image on the second device; or
the target signal comprises text, the first value comprises a playing speed value used for playing the text on the first device, and the second value comprises a playing speed value used for playing the text on the second device.
7. A data processing method, comprising:
a second device receiving a target signal;
the second device receiving a second value sent by a first device, wherein the second value is obtained by the first device according to a first value, a device type of the second device, and a first mapping table; the first value is a value used by the first device to play the target signal, the first mapping table represents an association between the device type of the second device and a weight coefficient, and the weight coefficient is used to obtain the second value from the first value;
the second device playing the target signal using the second value.
8. The method of claim 7, further comprising:
the second device obtaining a third value, wherein the third value is obtained by adjusting the second value;
the second device sending the third value to the first device, so that the first device updates the first mapping table to obtain a second mapping table, wherein the weight coefficient of the second device in the second mapping table is different from the weight coefficient of the second device in the first mapping table.
9. The method according to claim 7 or 8, wherein
the target signal comprises an audio signal, the first value comprises a volume value used for playing the audio signal on the first device, and the second value comprises a volume value used for playing the audio signal on the second device; or
the target signal comprises a video signal, the first value comprises a volume value or a brightness value used for playing the video signal on the first device, and the second value comprises a volume value or a brightness value used for playing the video signal on the second device; or
the target signal comprises an image, the first value comprises a brightness value or a scaling ratio used for displaying the image on the first device, and the second value comprises a brightness value or a scaling ratio used for displaying the image on the second device; or
the target signal comprises text, the first value comprises a playing speed value used for playing the text on the first device, and the second value comprises a playing speed value used for playing the text on the second device.
10. A data processing method, comprising:
a first device obtaining a first value used by the first device to play a target signal, and a device type of a second device;
the first device determining a weight coefficient of the second device;
the first device sending the first value and the weight coefficient to the second device, so that the second device determines a second value of the second device according to the first value and the weight coefficient.
11. The method of claim 10, wherein the first device determining the weight coefficient of the second device comprises:
the first device determining the weight coefficient of the second device according to a first mapping table, wherein the first mapping table represents an association between the device type of the second device and the weight coefficient.
12. The method according to claim 10 or 11, characterized in that the method further comprises:
the first device receiving a third value sent by the second device, wherein the third value is obtained by adjusting the second value;
the first device updating the first mapping table according to the third value to obtain a second mapping table, wherein the weight coefficient of the second device in the second mapping table is different from the weight coefficient of the second device in the first mapping table.
13. The method of claim 12, wherein after the first device receives the third value sent by the second device, the method further comprises:
the first device obtaining a target feature, wherein the target feature comprises at least one of the first value, a playing location of the second device, a system time of the second device, and the third value;
the first device training a model to be trained according to the target feature to obtain a prediction weight model.
14. The method of claim 13, further comprising:
the first device obtaining a fourth value according to the first value and the prediction weight model;
the first device sending the fourth value to the second device.
15. The method according to any one of claims 10 to 14, wherein
the target signal comprises an audio signal, the first value comprises a volume value used for playing the audio signal on the first device, and the second value comprises a volume value used for playing the audio signal on the second device; or
the target signal comprises a video signal, the first value comprises a volume value or a brightness value used for playing the video signal on the first device, and the second value comprises a volume value or a brightness value used for playing the video signal on the second device; or
the target signal comprises an image, the first value comprises a brightness value or a scaling ratio used for displaying the image on the first device, and the second value comprises a brightness value or a scaling ratio used for displaying the image on the second device; or
the target signal comprises text, the first value comprises a playing speed value used for playing the text on the first device, and the second value comprises a playing speed value used for playing the text on the second device.
16. A data processing method, comprising:
a second device receiving a target signal;
the second device receiving a first value and a weight coefficient sent by a first device, wherein the first value is a value used by the first device to play the target signal;
the second device determining a second value according to the first value and the weight coefficient;
the second device playing the target signal using the second value.
17. The method of claim 16, wherein the second device determining a second value according to the first value and the weight coefficient comprises:
the second device calculating the product of the first value and the weight coefficient to obtain the second value.
18. The method according to claim 16 or 17, further comprising:
the second device obtaining a third value, wherein the third value is obtained by adjusting the second value;
the second device playing the target signal using the third value;
the second device sending the third value to the first device, so that the first device uses the third value to update the weight coefficient of the second device in a first mapping table.
19. The method of claim 18, wherein after the second device obtains the third value, the method further comprises:
the second device obtaining a target feature, wherein the target feature comprises at least one of a playing location of the second device, a system time of the second device, and the third value;
the second device training a model to be trained according to the target feature to obtain a prediction weight model.
20. The method of claim 19, further comprising:
the second device obtaining a fourth value according to the first value and the prediction weight model;
the second device playing the target signal using the fourth value.
21. The method according to any one of claims 16 to 20, wherein
the target signal comprises an audio signal, the first value comprises a volume value used for playing the audio signal on the first device, and the second value comprises a volume value used for playing the audio signal on the second device; or
the target signal comprises a video signal, the first value comprises a volume value or a brightness value used for playing the video signal on the first device, and the second value comprises a volume value or a brightness value used for playing the video signal on the second device; or
the target signal comprises an image, the first value comprises a brightness value or a scaling ratio used for displaying the image on the first device, and the second value comprises a brightness value or a scaling ratio used for displaying the image on the second device; or
the target signal comprises text, the first value comprises a playing speed value used for playing the text on the first device, and the second value comprises a playing speed value used for playing the text on the second device.
22. A data processing method, comprising:
a second device receiving a target signal;
the second device receiving a first value from a first device, wherein the first value is a value used by the first device to play the target signal;
the second device determining a weight coefficient;
the second device determining a second value according to the first value and the weight coefficient;
the second device playing the target signal using the second value.
23. The method of claim 22, wherein the second device determining a weight coefficient comprises:
the second device determining the weight coefficient according to a first mapping table, wherein the first mapping table represents an association between a device type of the second device and the weight coefficient.
24. The method according to claim 22 or 23, further comprising:
the second device obtaining a third value, wherein the third value is obtained by adjusting the second value;
the second device playing the target signal using the third value.
25. The method of claim 24, further comprising:
the second device updating the first mapping table using the third value to obtain a second mapping table, wherein the weight coefficient of the second device in the second mapping table is different from the weight coefficient of the second device in the first mapping table.
26. The method of claim 24 or 25, wherein after the second device obtains the third value, the method further comprises:
the second device obtaining a target feature, wherein the target feature comprises at least one of a playing location of the second device, a system time of the second device, and the third value;
the second device training a model to be trained according to the target feature to obtain a prediction weight model.
27. The method of claim 26, further comprising:
the second device obtaining a fourth value according to the first value and the prediction weight model;
the second device playing the target signal using the fourth value.
28. The method according to any one of claims 22 to 27, wherein
the target signal comprises an audio signal, the first value comprises a volume value used for playing the audio signal on the first device, and the second value comprises a volume value used for playing the audio signal on the second device; or
the target signal comprises a video signal, the first value comprises a volume value or a brightness value used for playing the video signal on the first device, and the second value comprises a volume value or a brightness value used for playing the video signal on the second device; or
the target signal comprises an image, the first value comprises a brightness value or a scaling ratio used for displaying the image on the first device, and the second value comprises a brightness value or a scaling ratio used for displaying the image on the second device; or
the target signal comprises text, the first value comprises a playing speed value used for playing the text on the first device, and the second value comprises a playing speed value used for playing the text on the second device.
29. A first device, comprising: a processor coupled with a memory, wherein the memory is configured to store a program or instructions that, when executed by the processor, cause the first device to perform the method of any one of claims 1 to 6 or the method of any one of claims 10 to 15.
30. A second device, comprising: a processor coupled with a memory, wherein the memory is configured to store a program or instructions that, when executed by the processor, cause the second device to perform the method of any one of claims 7 to 9, the method of any one of claims 16 to 21, or the method of any one of claims 22 to 28.
31. A communication system, comprising: the first device of claim 29, and/or the second device of claim 30.
32. A computer-readable medium having stored thereon a computer program or instructions which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 6, or perform the method of any one of claims 7 to 9, or perform the method of any one of claims 10 to 15, or perform the method of any one of claims 16 to 21, or perform the method of any one of claims 22 to 28.
CN202010818113.8A 2020-08-14 2020-08-14 Data processing method and related equipment Pending CN114077412A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010818113.8A CN114077412A (en) 2020-08-14 2020-08-14 Data processing method and related equipment
PCT/CN2021/107582 WO2022033282A1 (en) 2020-08-14 2021-07-21 Data processing method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010818113.8A CN114077412A (en) 2020-08-14 2020-08-14 Data processing method and related equipment

Publications (1)

Publication Number Publication Date
CN114077412A (en) 2022-02-22

Family

ID=80247638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010818113.8A Pending CN114077412A (en) 2020-08-14 2020-08-14 Data processing method and related equipment

Country Status (2)

Country Link
CN (1) CN114077412A (en)
WO (1) WO2022033282A1 (en)


Also Published As

Publication number Publication date
WO2022033282A1 (en) 2022-02-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination