CN113377323A - Audio control method and electronic equipment

Info

Publication number: CN113377323A
Application number: CN202110485169.0A (filed by Honor Device Co., Ltd.)
Authority: CN (China)
Prior art keywords: playing, television, electronic device, electronic equipment, user
Legal status: Pending (the listed status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 丁大钧, 陆洋, 肖斌
Current assignee: Honor Device Co., Ltd.
Original assignee: Honor Device Co., Ltd.

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/165 Management of the audio stream, e.g. setting of volume, audio stream path

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An audio control method and an electronic device relate to the field of terminal technologies, and can automatically adjust the audio playing effect based on the user characteristics of the usage object, thereby providing a more intelligent and customized audio playing experience for users. The method includes the following steps: the electronic device detects a usage object of the electronic device in real time, where the usage object refers to one or more users having an interactive relationship with the electronic device; the electronic device acquires target data of the usage object, the target data including feature information of the usage object; and the electronic device sets playing parameters for playing audio according to the target data of the usage object, and plays audio according to the playing parameters.

Description

Audio control method and electronic equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to an audio control method and an electronic device.
Background
Currently, large-screen devices such as televisions are installed in many homes and offices. Compared with mobile terminals with smaller screens, such as mobile phones, large-screen devices can provide a richer audio-visual experience. For example, a television may use its display screen to show pictures or the frames of a video. As another example, a television may use its speakers to play the audio data of music or video.
Unlike mobile terminals such as mobile phones, large-screen devices such as televisions are often shared by multiple users. For example, dad, mom, and the children in a home may play audio data on the same television. In such a usage scenario, the television generally plays audio data in the same manner for every user, and this undifferentiated playing manner results in a poor user experience.
Disclosure of Invention
The present application provides an audio control method and an electronic device, which can automatically adjust the audio playing effect based on the user characteristics of the usage object, thereby providing a more intelligent and customized audio playing experience for users.
To achieve the above purpose, the following technical solutions are provided:
In a first aspect, the present application provides an audio control method, including: the electronic device detects a usage object of the electronic device in real time, where the usage object refers to one or more users having an interactive relationship with the electronic device; the electronic device acquires target data of the usage object, the target data including feature information of the usage object, such as age and gender; the electronic device sets playing parameters for playing audio, such as a sound effect mode, according to the target data of the usage object; and the electronic device plays audio according to the playing parameters.
That is, while operating, the electronic device can adjust the playing parameters for playing audio in real time according to the feature information of the current usage object, so that the current audio playing effect matches the characteristics of that usage object. In this way, different usage objects each obtain an audio playing experience corresponding to their own characteristics when using the electronic device, which provides a more intelligent and customized audio playing experience for the usage objects of the electronic device.
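Read as pseudocode, the first aspect is a detect-acquire-set-play loop. The sketch below is one hypothetical Python rendering of that loop; the patent defines no API, so the TargetData fields and the device methods (detect_usage_object, get_target_data, derive_playback_params, play_audio) are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TargetData:
    age: Optional[int] = None               # feature information of the usage object
    gender: Optional[str] = None
    height_cm: Optional[float] = None
    position: Optional[tuple] = None        # (x, y) relative to the device
    play_records: list = field(default_factory=list)   # records from other devices

def audio_control_step(device) -> None:
    """One iteration of the real-time loop: detect, acquire, set, play."""
    user = device.detect_usage_object()      # e.g. camera capture + face recognition
    if user is None:
        return                               # nobody currently interacts with the device
    data: TargetData = device.get_target_data(user)   # feature info, position, records
    params = device.derive_playback_params(data)      # set the playing parameters
    device.play_audio(params)                # play audio with those parameters
```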
In one possible implementation, the electronic device detecting a usage object of the electronic device includes: the electronic device acquires a first image using a camera; the electronic device then determines the usage object of the electronic device by performing face recognition on the first image. Alternatively, the electronic device may detect the usage object by other means, such as infrared scanning.
In one possible implementation, the electronic device determining a usage object of the electronic device by performing face recognition on the first image includes: the electronic device detects facial features in the first image; the electronic device then determines the user corresponding to the detected facial features as the usage object of the electronic device.
The electronic device may have more than one usage object. For example, when the first image contains the facial features of multiple users, the electronic device may determine the user with the highest priority among those users as the usage object of the electronic device.
Alternatively, when the usage object of the electronic device includes multiple users, the feature information of the usage object may be feature information common to those users. In this way, the electronic device can set the playing parameters based on the feature information shared by the users, so that all of the electronic device's usage objects obtain a better audio playing experience.
In a possible implementation, the feature information of the usage object may include at least one of the gender, age, or height of the usage object. The electronic device setting playing parameters for playing audio according to the target data of the usage object includes: when the feature information of the usage object is first feature information, the electronic device sets the sound effect mode in the playing parameters to a first sound effect mode; when the feature information of the usage object is second feature information, the electronic device sets the sound effect mode in the playing parameters to a second sound effect mode. That is, different users obtain a sound effect mode corresponding to their own characteristics when using the electronic device.
In a possible implementation, the target data may further include play records of the usage object on other devices. The electronic device setting playing parameters for playing audio according to the target data of the usage object includes: the electronic device sets the sound effect mode in the playing parameters according to the play records of the usage object on the other devices. That is, the electronic device may set the playing parameters of the audio by combining the feature information of the usage object with the usage object's play records on other devices.
In a possible implementation, the target data may further include position data of the usage object. The electronic device setting playing parameters for playing audio according to the target data of the usage object includes: the electronic device sets the sound effect parameters in the playing parameters according to the position data of the usage object. That is, the electronic device may set the playing parameters of the audio by combining at least one of the feature information of the usage object, the position data, and the usage object's play records on other devices.
For example, the sound effect parameters may include the volume of the left channel and the volume of the right channel. The electronic device setting the sound effect parameters in the playing parameters according to the position data of the usage object includes: when the position data indicates that the usage object is located to the left of the electronic device, the electronic device sets the volume of the left channel to be lower than the volume of the right channel; when the position data indicates that the usage object is located to the right of the electronic device, the electronic device sets the volume of the left channel to be higher than the volume of the right channel, so that the stereo sound effect is improved when the electronic device subsequently plays audio.
In a possible implementation, the target data may further include information about the room in which the electronic device is located. The electronic device setting playing parameters for playing audio according to the target data of the usage object includes: the electronic device sets the sound effect parameters in the playing parameters according to the position data of the usage object and the room information, so as to improve the stereo sound effect when the electronic device plays audio.
In one possible implementation, after the electronic device sets the playing parameters for playing audio according to the target data of the usage object, the method further includes: the electronic device stores the correspondence between the usage object and the playing parameters. When the electronic device later detects the same usage object, it can play audio directly using the playing parameters corresponding to that usage object, without acquiring target data such as feature information again to reset the playing parameters.
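A minimal sketch of that caching step, assuming recognized usage objects can be keyed by a stable identifier such as a face-match ID; the structure and names are hypothetical.

```python
# Hypothetical cache of usage object -> playing parameters. Once parameters
# have been derived for a recognized user, later detections of the same user
# reuse them instead of re-collecting target data.
param_cache: dict[str, dict] = {}   # user identifier -> playing parameters

def get_playback_params(user_id: str, derive_fn) -> dict:
    """Return stored parameters for a previously seen usage object, if any."""
    if user_id not in param_cache:
        param_cache[user_id] = derive_fn(user_id)   # derive from target data once
    return param_cache[user_id]
```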
In one possible implementation, before the electronic device detects the usage object of the electronic device, the method further includes: the electronic device prompts the user to enter feature information when it is used for the first time. In this way, when the electronic device detects that the current usage object is a user whose feature information has been entered, it can directly retrieve the entered feature information without having to identify the usage object's feature information through algorithms such as face recognition.
In a second aspect, the present application provides an audio control method, including: the electronic device detects a usage object of the electronic device, where the usage object refers to one or more users having an interactive relationship with the electronic device; the electronic device acquires target data of the usage object, the target data including play records of the usage object on other devices; the electronic device sets playing parameters for playing audio according to the target data of the usage object; and the electronic device plays audio according to the playing parameters.
Unlike the first aspect, here the electronic device may directly adjust, in real time, the playing parameters for playing audio, such as the sound effect mode, according to the current usage object's play records on other devices, so that the current audio playing effect matches the characteristics of the usage object, thereby providing a more intelligent and customized audio playing experience for the usage objects of the electronic device.
In a possible implementation, the target data further includes feature information of the usage object, the feature information including at least one of the gender, age, or height of the usage object. In this case, the electronic device may set the playing parameters according to both the usage object's play records on other devices and the feature information of the usage object.
In a possible implementation, the target data further includes position data of the usage object. In this case, the electronic device may set the playing parameters according to the usage object's play records on other devices, the feature information of the usage object, and the position data of the usage object. For example, the electronic device may set the sound effect mode in the playing parameters according to the play records and the feature information, and set the sound effect parameters in the playing parameters according to the position data.
In a third aspect, the present application provides an audio control method, including: the electronic device detects a usage object of the electronic device, where the usage object refers to one or more users having an interactive relationship with the electronic device; the electronic device acquires target data of the usage object, the target data including position data of the usage object; the electronic device sets playing parameters for playing audio according to the target data of the usage object; and the electronic device plays audio according to the playing parameters.
Unlike the first and second aspects, here the electronic device may adjust, in real time, the playing parameters for playing audio, such as the sound effect parameters, directly according to the position data of the current usage object, so that the current audio playing effect matches the characteristics of the usage object, thereby providing a more intelligent and customized audio playing experience for the usage objects of the electronic device.
In a possible implementation, the target data further includes feature information of the usage object, the feature information including at least one of the gender, age, or height of the usage object. In this case, the electronic device may set the sound effect parameters in the playing parameters according to the position data of the usage object, and set the sound effect mode in the playing parameters according to the feature information of the usage object.
In a possible implementation, the target data further includes play records of the usage object on other devices. In this case, the electronic device may set the sound effect mode in the playing parameters according to the play records and the feature information of the usage object, and set the sound effect parameters in the playing parameters according to the position data of the usage object.
In a fourth aspect, the present application provides an electronic device, including a memory, a camera, a display screen, and one or more processors, where the memory, the camera, and the display screen are coupled to the processors. The memory stores computer program code, and the computer program code includes computer instructions. When the electronic device is running, the processors execute the computer instructions stored in the memory, causing the electronic device to perform the audio control method of any one of the first to third aspects.
In a fifth aspect, the present application provides a computer storage medium comprising computer instructions that, when run on an electronic device, cause the electronic device to perform the audio control method of any of the first to third aspects.
In a sixth aspect, the present application provides a computer program product for causing an electronic device to perform the audio control method according to any one of the first to third aspects when the computer program product is run on the electronic device.
It can be understood that the electronic device of the fourth aspect, the computer storage medium of the fifth aspect, and the computer program product of the sixth aspect are all configured to perform the corresponding methods provided above. For their beneficial effects, reference may be made to the beneficial effects of the corresponding methods, and details are not repeated here.
Drawings
Fig. 1 is a first schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 2 is a first schematic diagram of an application scenario of an audio control method according to an embodiment of the present application;
fig. 3 is a second schematic diagram of an application scenario of an audio control method according to an embodiment of the present application;
fig. 4 is a third schematic diagram of an application scenario of an audio control method according to an embodiment of the present application;
fig. 5 is a fourth schematic diagram of an application scenario of an audio control method according to an embodiment of the present application;
fig. 6 is a fifth schematic diagram of an application scenario of an audio control method according to an embodiment of the present application;
fig. 7 is a sixth schematic diagram of an application scenario of an audio control method according to an embodiment of the present application;
fig. 8 is a first flowchart of an audio control method according to an embodiment of the present application;
fig. 9 is a second flowchart of an audio control method according to an embodiment of the present application;
fig. 10 is a seventh schematic diagram of an application scenario of an audio control method according to an embodiment of the present application;
fig. 11 is an eighth schematic diagram of an application scenario of an audio control method according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.
The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
For example, the audio control method provided in the embodiments of the present application may be applied to any electronic device having an audio playing function, such as a television (also referred to as a smart screen), a projector, or another large-screen device, a vehicle-mounted device (also referred to as a head unit), a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA), or a virtual reality device; this is not limited in the embodiments of the present application.
Taking a television as an example of the above electronic device, fig. 1 shows a schematic structural diagram of a television provided in an embodiment of the present application.
As shown in fig. 1, the television may include a processor 110, an internal memory 121, an antenna, a wireless communication module 160, an audio module 170, a speaker 170A, keys 190, an indicator 191, a display screen 192, and the like.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors. A memory may also be provided in the processor 110 for storing instructions and data.
The wireless communication function of the television may be realized through the antenna, the wireless communication module 160, and the like. The wireless communication module 160 may provide solutions for wireless communication applied to the television, including WLAN (such as a Wi-Fi network), BT, GNSS, FM, NFC, IR, and the like.
The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves for radiation via the antenna. In some embodiments, the antenna of the television is coupled to the wireless communication module 160, so that the television can communicate with networks and other devices through wireless communication technologies. For example, in the embodiments of the present application, the television may communicate with a relay device through the wireless communication module 160, such as receiving a unified identifier from the relay device. Of course, the television may also communicate with other terminals through the wireless communication module 160.
The television implements display functions through the GPU, the display screen 192, the application processor, and the like. The GPU is a microprocessor for image processing that connects the display screen 192 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 192 is used to display images, videos, and the like. For example, in the embodiments of the present application, the television may display advertisements via the display screen 192. The display screen 192 includes a display panel. The display panel may be an LCD, an OLED, an AMOLED, an FLED, a mini-LED, a micro-LED, a micro-OLED, a QLED, or the like.
Video codecs are used to compress or decompress digital video. A television may support one or more video codecs. In this way, a television can play or record video in a variety of encoding formats, such as: MPEG1, MPEG2, MPEG3, MPEG4, and the like.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the television and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area can store data (such as audio data and the like) created in the television use process and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a UFS, or the like.
The television may implement audio functions, such as playing advertisement audio, through the audio module 170, the speaker 170A, the application processor, and the like. The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110. The speaker 170A, also called a "horn", is used to convert an audio electrical signal into a sound signal. The microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The television may receive key inputs and generate key signal inputs related to user settings and function control of the television.
The indicator 191 may be an indicator light and may be used to indicate whether the television is in the on, standby, or off state. For example, an indicator light that is off may indicate that the television is in the off state; a green or blue indicator light may indicate that the television is in the on state; and a red indicator light may indicate that the television is in the standby state.
In some embodiments, as shown in fig. 1, the television may further include an external memory interface 320, a USB interface 330, a power management module 340, a speaker interface 170B, a microphone 170C, a sensor module 380, one to N cameras 193 (N is an integer greater than 1), and so on. The sensor module 380 may include a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, and the like. In other embodiments, the television may not include a camera, that is, the camera 193 is not disposed in the television. In that case, the television may be externally connected to a camera 193 through an interface, such as the USB interface 330. The external camera 193 may be fixed to the television by an external fastener, such as a clip-on camera mount. For example, the external camera 193 may be fixed to an edge, such as the upper edge, of the display screen 192 of the television by the external fastener.
It is to be understood that the illustrated structure of the present embodiment does not constitute a specific limitation to the television. In other embodiments, the television may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
In addition, the television may be connected to a network box, such as a set-top box, via an interface, such that the television may interact with the set-top box. For example, a television may access a current Wi-Fi network through a set-top box; for another example, the television may obtain a positioning result for indoor positioning of the user from the set-top box.
In some embodiments, the television may be equipped with a remote controller for controlling the television. The remote controller may include a plurality of keys, such as a power key, volume keys, and other selection keys. The keys on the remote controller may be mechanical keys or touch keys. The remote controller may receive key inputs, generate key signal inputs related to user settings and function control of the television, and send corresponding control signals to the television to control it. For example, the remote controller may send control signals to the television through infrared signals or the like. The remote controller may also include a battery compartment for holding the battery that powers the remote controller.
In the embodiments of the present application, a user watching the picture played by the television, listening to audio played by the television, or holding the remote controller may be referred to as a usage object of the television. That is, the usage object of the television refers to one or more users who have an interactive relationship with the television. The interactive relationship may mean that the television can output content such as audio or images to the usage object, or that the usage object can input content such as text, audio, gestures, or control commands to the television.
As shown in fig. 2, the television 201 can obtain the feature information of the usage object 202 in real time after it starts operating, for example, feature information such as the gender, age, or height of the usage object 202. The television 201 can then set playing parameters, such as the sound effect and volume, for the audio currently being played according to the feature information of the usage object 202. For example, when the age of the usage object 202 is greater than a threshold, the television 201 may increase the volume of the currently played audio so that an elderly user can hear the audio played by the television clearly. As another example, when the gender of the usage object 202 is female, the television 201 may set the sound effect of the currently played audio to a more melodious sound effect mode, so as to enhance the listening experience of a female user of the television.
That is, during operation, the television 201 may adjust the playing parameters for playing audio in real time according to the feature information of the current usage object, so that the current audio playing effect matches the characteristics of the usage object. In this way, different users obtain an audio playing experience corresponding to their own characteristics when using the television 201, which provides a more intelligent and customized audio playing experience for the usage objects of the television 201.
The following describes in detail an audio control method provided in an embodiment of the present application, taking the television 201 as an example and with reference to the drawings.
For example, when the user uses the television 201 for the first time, the television 201 may be initially configured after being turned on. The initial configuration may include procedures such as accessing a Wi-Fi network and setting up a high-definition multimedia interface (HDMI). During the initial configuration, as shown in fig. 3, the television 201 may display a dialog box 301 asking the user whether to enable the audio control function (i.e., the smart play function) provided in the embodiments of the present application. For example, the dialog box 301 may show the user the specific permissions that the smart play function requires the television 201 to be granted, such as the permission to turn on the camera to capture images, the permission to read stored information, and the permission to automatically modify playing parameters such as the volume. As another example, the dialog box 301 may also show the user the specific functions that the smart play function can implement. If it is detected that the user selects the confirmation button 302 in the dialog box 301, the television 201 may perform audio control as described in the following embodiments.
In some embodiments, after the television 201 enables the smart play function, it may guide one or more usage objects who will subsequently use the television 201 to enter their own feature information into the television 201. For example, as shown in fig. 4, the television 201 may guide the user 402, by text, voice, animation, or the like, to move into the shooting range of the camera 401. The television 201 may then invoke the camera 401 to capture an image 403 containing the user 402. As also shown in fig. 4, the television 201 may display the captured image 403, and the television 201 may identify the facial features in the image 403 through an algorithm such as face recognition, so as to determine feature information such as the gender and age of the user 402. For example, the facial features in the image 403 may include feature information of specific parts (also called local features, such as eye features and mouth features) and feature information of the whole (also called global features, such as face-shape features). The television 201 may predict feature information such as the gender and age of the user 402 from the extracted facial features using a corresponding prediction algorithm. For example, the age of the user 402 may be estimated from texture information in the eye features, and the gender of the user 402 may be estimated from local features of the face, such as a beard. The television 201 may also estimate feature information such as the height of the user 402 from the scale of the user 402 in the image 403.
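As a rough illustration of this detection step, the sketch below uses OpenCV's stock frontal-face cascade to locate faces and a placeholder in place of the age/gender predictor, since the patent names no specific model; the attribute estimator and the height heuristic are assumptions.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def estimate_attributes(face_img):
    """Placeholder for an age/gender predictor (e.g. a small CNN)."""
    return 30, "unknown"   # dummy values; a real model would infer these

def detect_usage_objects(frame):
    """Find faces in a camera frame and attach estimated attributes."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        face_img = frame[y:y + h, x:x + w]
        age, gender = estimate_attributes(face_img)
        # Height might be approximated from the face's scale in the frame given
        # the camera's field of view -- a strong assumption, as noted above.
        results.append({"bbox": (x, y, w, h), "age": age, "gender": gender})
    return results
```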
Alternatively, the feature information such as the gender, age, or height of the user 402 may be entered into the television 201 manually by the user 402; for example, the user 402 may enter this feature information into the television 201 using a remote controller.
Alternatively, in addition to gender, age, or height, the user 402 may enter feature information such as the member type (e.g., dad, mom, or baby), interests and preferences, or birthday into the television 201; this is not limited in the embodiments of the present application. After the television 201 acquires the feature information of the user 402, it may store the correspondence between the feature information of the user 402 and the facial features (or facial image) of the user 402 in the memory of the television 201.
For example, after acquiring the feature information of the user 402, the television 201 may continue to acquire and store the feature information of other users according to the above method. When the television 201 has acquired the feature information of multiple users, it may also set a priority order among those users. A higher priority indicates that the user has more control over the television 201 and that the television 201 is subsequently more likely to set the playing parameters of audio with that user as the usage object; that is, a user with a higher priority is more likely to become the usage object of the television 201.
For example, the television 201 may automatically set the priority order among the users according to feature information such as their ages; for instance, the television 201 may set the priority of users under 12 years of age lower than that of users over 18 years of age. Alternatively, the user may manually set the priority order among the users in the television 201.
Subsequently, when the television 201 determines during operation that the current usage object is an entered user (for example, the user 402), the television 201 may set the playing parameters for playing audio according to the feature information of the user 402, which will be described in detail in the following embodiments.
Alternatively, after the television 201 enables the smart play function, it may skip entering the user's feature information during the initial configuration stage and instead detect and determine the feature information of the current usage object in real time during operation. For example, the television 201 may capture an image through the camera 401 while playing a video program and then identify whether the captured image contains facial information, so as to determine feature information such as the gender, age, and height of the current usage object.
In other embodiments, after the television 201 enables the smart play function, the television 201 may further obtain room information about the room in which it is located. For example, the room information may include the size and shape of the room and the position of the television 201 within it. For instance, from images taken by the camera, the television 201 may recognize that the room is about 20 square meters, that the room is square, and that the television 201 is positioned directly opposite the center of the sofa in the room. Subsequently, the television 201 may also combine this room information when setting the playing parameters for playing audio during operation, which will be described in detail in the following embodiments. Of course, the television 201 may instead detect the room information during operation rather than entering it during the initial configuration stage; this is not limited in the embodiments of the present application.
In addition, the above embodiments take as an example the user entering feature information or room information into the television 201 during the initial configuration stage. In other embodiments, the user may also use a mobile terminal such as a mobile phone to control the television 201 to complete the operations related to the initial configuration.
For example, as shown in fig. 5, when the user uses the television 201 for the first time, the television 201 may be connected to an access point (AP) 503 to join the wireless fidelity (Wi-Fi) network provided by the access point 503, so that the television 201 can use the Wi-Fi network to communicate with a server on the network side through the access point 503. Similarly, the user may connect a mobile terminal such as the mobile phone 502 to the Wi-Fi network provided by the access point 503. Devices in the same Wi-Fi network can then communicate with each other through the access point 503. The user can thus perform the initial configuration and subsequent management of the television 201 through the mobile phone 502; for example, the mobile phone 502 may collect the user's feature information and send it to the television 201 for storage.
Alternatively, the user may use a smart home APP installed on the mobile phone 502 to discover and manage other devices in the same Wi-Fi network. For example, when the user adds the television 201 to the smart home APP, the mobile phone 502 may quickly discover and add the device by establishing a peer-to-peer (P2P) connection with the television 201. After the television 201 is added to the smart home APP as a new device, the user may perform the initial configuration and subsequent management of the television 201 through the smart home APP.
For example, as shown in fig. 6, the user may enter feature information such as the user's gender, age, or height, and room information such as the size and shape of the room, into the smart home APP on the mobile phone 502. The mobile phone 502 may then send the acquired feature information and room information to the television 201, for example through the access point 503, so that the television 201 completes the initial configuration process. Subsequently, the television 201 may also interact with the mobile phone 502 through the access point 503 during operation; this is not limited in the embodiments of the present application. In addition, the smart home APP may also be called a smart family APP, a smart life APP, or the like; the embodiments of the present application place no restriction on this.
The television 201 may be powered off after the initial configuration process is completed; the data entered into the television 201, such as the user's feature information and the room information, is not lost. Subsequently, when the television 201 is turned on again, it can provide various audio and video services to the user directly according to the following steps. Alternatively, the television 201 may remain on after completing the initial configuration process and directly provide various audio and video services according to the following steps. In addition, after the television 201 completes the initial configuration process, setting options may be provided within the television 201, through which the user may add or delete users' feature information and so on; this is not limited in the embodiments of the present application.
After completing the initial configuration during first use, the television 201 can provide various audio and video services to the user.
For example, after the television 201 starts operating, it may determine the current usage object of the television 201. For example, the television 201 may periodically capture the current image using the camera. The television 201 may then determine, through an algorithm such as face recognition, whether the captured image contains facial features, the number of faces, and so on.
For example, as shown in (a) of fig. 7, if the image 701 contains a facial feature (e.g., the facial feature 702), the television 201 may determine that it currently has a usage object, namely the user A corresponding to the facial feature 702. Alternatively, after detecting the facial feature 702 in the image 701, the television 201 may further determine from the facial feature 702 whether the corresponding user is facing the television 201. If the user is facing the television 201, the television 201 may determine that it currently has a usage object, namely the user A corresponding to the facial feature 702.
As another example, as shown in (b) of fig. 7, the image 703 captured by the television 201 may contain multiple facial features, such as the facial feature 702 and the facial feature 704. In this case, the television 201 may determine that it currently has two usage objects, namely the user A corresponding to the facial feature 702 and the user B corresponding to the facial feature 704. Alternatively, when the image 703 contains multiple facial features, the television 201 may further analyze whether the user corresponding to each facial feature is facing the television 201. Taking the facial feature 704 as an example: although the television 201 detects the facial feature 704, the user B corresponding to the facial feature 704 is using a mobile phone, which indicates that user B's attention is not on the television 201. The television 201 may then determine that its current usage object is the user A corresponding to the facial feature 702, and that user B is not a usage object of the television 201 at this time; a sketch of this attention filter follows.
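The sketch below illustrates such an attention filter; the head-pose field and the 30-degree threshold are assumptions, since the patent does not specify how "facing the television" is judged.

```python
def is_facing_screen(face: dict) -> bool:
    """Placeholder attention check; a head-pose yaw angle is assumed."""
    yaw = face.get("yaw_degrees", 0.0)
    return abs(yaw) < 30.0   # assumed threshold; the patent does not specify one

def select_usage_objects(detected_faces: list) -> list:
    """Keep only detected users whose attention is on the television."""
    return [face for face in detected_faces if is_facing_screen(face)]
```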
Of course, besides determining the current usage object of the television 201 by capturing images and performing face recognition, a person skilled in the art may also configure the television 201 to determine the current usage object by other means. For example, an infrared sensor may be installed in the television 201 to sense the infrared signals of human bodies and thereby determine the current usage object of the television 201.
Taking the user A corresponding to the facial feature 702 as the current usage object of the television 201 as an example, as shown in fig. 8, after the television 201 determines that the current usage object is user A, the television 201 may check whether the feature information of user A was entered into the television 201 during the initial configuration. For example, the television 201 may use the currently captured facial feature 702 as an index to query whether a facial feature (or facial image) matching the facial feature 702 is stored in the television 201. If a matching facial feature (or facial image) is stored, the television 201 may use the feature information entered for that facial feature (or facial image) during the initial configuration as the feature information of the current usage object (i.e., user A).
Conversely, if no facial feature (or facial image) matching the facial feature 702 is stored, this indicates that the feature information of user A was not entered during the initial configuration. For example, the facial feature 702 captured by the television 201 may belong not to a family member but to a guest, and a guest typically does not enter his or her feature information when the television 201 is initially configured. In that case, after the television 201 captures the guest's facial feature (e.g., the facial feature 702) during operation, it may determine the feature information of the current usage object (i.e., user A) based on the currently captured facial feature 702. For example, the television 201 may estimate feature information such as the gender and age of user A from the facial feature 702.
In this way, whether or not the current usage object was entered during the initial configuration, the television 201 can obtain the usage object's feature information in real time during operation, and this feature information reflects the individual characteristics of the current usage object. Usage objects with different characteristics may place different requirements on the television 201. For example, an elderly person may need the television 201 to play audio at a higher volume, while a child using the television 201 may want the television 201 to play audio in a softer and gentler manner.
In other embodiments, if the user corresponding to the facial feature 702 currently captured by the television 201 did not enter feature information during the initial configuration, the television 201 may also skip acquiring that user's feature information and setting corresponding playing parameters, and instead play audio using default playing parameters. That is, a user who has not entered feature information does not get the adaptive adjustment of playing parameters provided in the embodiments of the present application; this is not limited in the embodiments of the present application.
In the embodiments of the present application, as shown in fig. 8, when the television 201 acquires the feature information of the current usage object, it may adjust in real time the playing parameters for playing audio, such as the sampling rate, volume, sound effect mode, or sound mixing strategy, according to that feature information. The television 201 can then play the current audio data using the latest playing parameters, so that the current audio playing effect matches the characteristics of the usage object. In this way, the television 201 adapts its audio playing effect to the individual differences of its usage objects, providing a more intelligent and customized audio playing experience.
For example, the television 201 may preset correspondences between different feature information and different playing parameters. For instance, when the feature information indicates that the usage object is under 10 years old, the corresponding sound effect mode may be a sound effect mode A with a soft playing effect; when the feature information indicates that the usage object is over 60 years old, the volume may be raised by three volume steps from the current level. Then, still taking user A as the usage object of the television 201, after the television 201 acquires the feature information of user A, it may look up the playing parameter A corresponding to user A's feature information in these correspondences. The television 201 can then play audio data using the playing parameter A, so that user A obtains a customized playing effect.
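A minimal sketch of such a preset rule table, keeping only the two thresholds given above (under 10 years: soft sound effect mode; over 60 years: plus three volume steps); the field names and everything else are assumptions.

```python
def derive_params_from_features(age: int, current_volume: int) -> dict:
    """Map feature information to playing parameters via preset rules."""
    params = {"sound_effect_mode": "standard", "volume": current_volume}
    if age < 10:
        params["sound_effect_mode"] = "soft"     # sound effect mode A in the text
    elif age > 60:
        params["volume"] = current_volume + 3    # raise by three volume steps
    return params
```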
Alternatively, a trained neural network model may be provided in the television 201, whose input parameters are the items of the above feature information and whose output parameters are the corresponding playing parameters. After acquiring the feature information of user A, the television 201 may feed it into the neural network model to obtain the playing parameter A corresponding to user A's feature information. The television 201 can then play audio data using the playing parameter A, so that user A obtains a customized playing effect.
Alternatively, the television 201 may send the feature information of user A to a server on the network side; the server determines the corresponding playing parameters based on user A's feature information, for example through big-data statistics, and returns the determined playing parameters to the television 201.
In other embodiments, as shown in fig. 9, besides setting the playing parameters of audio based on the feature information of the current usage object, the television 201 may also set its playing parameters based on the usage object's play records on other audio playing devices, so that the audio playing effect of the television 201 better matches the usage object's preferences.
For example, still taking user A as the usage object of the television 201: after the television 201 determines that its usage object is user A, the television 201 may interact with user A's mobile phone through the currently connected Wi-Fi network. For example, the television 201 may send a request message to user A's mobile phone, requesting the playlist of an audio player in the phone or the playing parameters of that audio player. In response, user A's mobile phone may send the playlist or the playing parameters to the television 201. The television 201 can then set its own playing parameters based on the playlist or playing parameters that user A uses on the phone. For example, if user A's playlist on the phone consists mostly of rock music, the television 201 may set its sound effect mode to the rock mode. As another example, if user A set the sampling rate to 48 kHz in the playing parameters on the phone, the television 201 may also set its sampling rate for playing audio to 48 kHz.
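The sketch below shows how such a fetched play record might be mapped to parameters, mirroring the rock-mode and 48 kHz examples above; the record format and field names are assumptions.

```python
from collections import Counter

def params_from_play_record(record: dict) -> dict:
    """Derive playing parameters from a play record fetched from a phone."""
    params = {}
    genres = Counter(t.get("genre") for t in record.get("playlist", []))
    if genres and genres.most_common(1)[0][0] == "rock":
        params["sound_effect_mode"] = "rock"     # most tracks are rock music
    settings = record.get("player_settings", {})
    if "sampling_rate_hz" in settings:
        params["sampling_rate_hz"] = settings["sampling_rate_hz"]  # e.g. 48000
    return params
```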
In some scenarios, the playing parameter A that the television 201 determines from the feature information of the usage object may differ from the playing parameter B that the television 201 determines from the usage object's play records on another audio playing device. In that case, the television 201 may select one of the two as its playing parameter. For example, if the feature information of the usage object is preset to have a higher priority than the usage object's play records on other audio playing devices, the television 201 may use playing parameter A when subsequently playing audio.
Alternatively, the television 201 may take a weighted average of the values in playing parameter A and the values in playing parameter B to obtain a new playing parameter C, and use playing parameter C when subsequently playing audio; this is not limited in the embodiments of the present application.
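Both conflict-resolution strategies, selection by priority and weighted averaging, can be sketched as follows; representing a parameter set as a dictionary, and the equal default weights, are assumptions.

```python
def merge_params(params_a: dict, params_b: dict,
                 weight_a: float = 0.5, prefer: str = "") -> dict:
    """Resolve conflicting parameter sets by priority or weighted average."""
    if prefer == "a":
        return dict(params_a)    # e.g. feature information was given priority
    if prefer == "b":
        return dict(params_b)
    merged = {**params_b, **params_a}
    for key in params_a.keys() & params_b.keys():
        a, b = params_a[key], params_b[key]
        if isinstance(a, (int, float)) and isinstance(b, (int, float)):
            merged[key] = weight_a * a + (1 - weight_a) * b   # playing parameter C
    return merged
```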
In other embodiments, as shown in fig. 9, besides setting the playing parameters based on the feature information of the current usage object and/or the usage object's play records on other audio playing devices, the television 201 may also set the playing parameters for playing audio by combining the usage object's real-time position data in the room. One or more of the usage object's feature information, the usage object's play records on other audio playing devices, and the usage object's position data may be referred to as target data.
For example, still taking user A as the usage object of the television 201: after the television 201 determines that its usage object is user A, it may obtain user A's indoor positioning result (i.e., the position data of the usage object). For example, when the television 201 captures an image of user A, it may obtain not only user A's facial features but also information such as user A's scale and position within the frame. The television 201 can then perform computer-vision positioning based on this scale and position information to obtain user A's position data relative to the television 201.
As another example, as shown in fig. 10, the television 201 may send a positioning request to user A's mobile phone 1001 through the access point 503. In response, the mobile phone 1001 may perform indoor positioning of itself based on wireless signals such as Wi-Fi or Bluetooth signals. The mobile phone 1001 may then send its positioning result to the television 201 as user A's positioning result, so that the television 201 learns user A's position data.
Of course, the television 201 may also locate its usage object using indoor positioning technologies such as ultrasonic positioning; this is not limited in the embodiments of the present application. In some embodiments, the television 201 may perform indoor positioning of user A in several of the above ways simultaneously and obtain multiple positioning results for user A. The television 201 may then apply corrections such as fitting and normalization to these positioning results to finally determine user A's position in the room.
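A minimal sketch of fusing several positioning estimates, assuming a simple weighted average stands in for the fitting and normalization mentioned above; the weights are assumptions.

```python
def fuse_positions(estimates, weights=None):
    """Combine several (x, y) indoor-positioning estimates into one point."""
    if weights is None:
        weights = [1.0] * len(estimates)   # e.g. vision, phone Wi-Fi, ultrasound
    total = sum(weights)
    x = sum(w * e[0] for w, e in zip(weights, estimates)) / total
    y = sum(w * e[1] for w, e in zip(weights, estimates)) / total
    return (x, y)
```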
After the television 201 obtains user A's position data in the room, it can set the playing parameters for playing audio based on that position data. For example, based on user A's position in the room, the television 201 may set sound effect parameters that give the television 201 a stereo sound effect when playing audio.
For example, if user A is located to the left of the television 201, meaning that user A is closer to the left side of the television 201 and farther from its right side, the television 201 may set the volume of the left channel lower than the volume of the right channel in the sound effect parameters. When the television 201 subsequently plays audio with these sound effect parameters, the stereo effect experienced by user A at the current position is improved. Moreover, when user A moves to a different location, the television 201 can provide a stereo playing effect for user A in the same way. In some embodiments, still taking user A on the left of the television 201 as an example, the television 201 may further set the volume of the left channel and the volume of the right channel according to the specific distance between user A and the television 201; this is not limited in the embodiments of the present application.
Alternatively, after the television 201 acquires the position data of user A in the room, it may set the sound effect parameters in combination with the room information of the room in which the television 201 is located. For example, the television 201 may set the volume levels of the left and right channels in combination with the size of the current room and the position data of user A in the room. For another example, the television 201 may calculate the reflection paths of sound waves during audio playback in combination with the shape of the current room and the position of the television 201 in the room, and then set the volume levels of the left and right channels in combination with these reflection paths and the position data of user A in the room, so as to improve the stereo effect when the television 201 subsequently plays audio.
The above embodiments take user A as the usage object of the television 201 as an example. In other embodiments, the usage object of the television 201 may include two or more users. When the usage object of the television 201 includes multiple users, the television 201 may determine the user with the highest priority among them, and then set the playing parameters used when playing audio according to parameters such as the feature information of that highest-priority user.
Illustratively, taking user A and user C as the usage objects of the television 201, as shown in fig. 11, if the television 201 identifies user A corresponding to the facial feature 702 and user C corresponding to the facial feature 1102 in the captured image 1101, the television 201 may determine that its current usage objects are user A and user C. At this time, the television 201 may further determine which of user A and user C has the highest priority.
For example, it may be preset that the closer a user is to the television 201, the higher that user's priority. The television 201 may then determine whichever of user A and user C is closest to the television 201 as the user with the highest priority at that time. Furthermore, the television 201 may set the playing parameters used when playing audio according to the method described above, based on parameters such as the feature information of the closest user, that user's position data, and that user's historical playing records on other devices.
For another example, a priority order among the entered users may be preset in the initial configuration phase. For example, user B has a higher priority than user C, and user C has a higher priority than user A. Then, when the television 201 determines that its current usage objects are user A and user C, user C can be determined as the user with the highest priority at this time according to the preset priority order. Furthermore, the television 201 may set the playing parameters used when playing audio according to the above method, based on the feature information of user C and parameters such as the historical playing records of user C on other devices.
For another example, when the television 201 determines that its current usage objects are user A and user C, the television 201 may automatically determine the user with the highest priority at this time according to a certain policy. For example, the television 201 may obtain the ages of user A and user C respectively; if user A is less than 10 years old and user C is greater than 18 years old, the television 201 may determine user C as the user with the highest priority at that time. For another example, the television 201 may detect whether user A or user C is holding the remote controller, and determine the user holding the remote controller as the user with the highest priority at this time.
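These example policies can be composed into a single ordering. The sketch below is one possible composition, assuming a per-user detection result with hypothetical fields holds_remote, age and distance_m; the patent names the criteria but not their precedence:

```python
def pick_primary_user(users):
    """Pick the highest-priority usage object among the detected users.

    users: list of dicts, e.g.
        {"name": "C", "distance_m": 2.1, "age": 35, "holds_remote": True}
    Priority (assumed precedence): the remote-control holder first,
    then adults over young children, then the user nearest to the TV.
    """
    return min(
        users,
        key=lambda u: (
            not u.get("holds_remote", False),   # remote holder sorts first
            u.get("age", 18) < 18,              # adults before minors
            u.get("distance_m", float("inf")),  # nearest user first
        ),
    )

# pick_primary_user([{"name": "A", "age": 8, "distance_m": 1.0},
#                    {"name": "C", "age": 35, "distance_m": 2.5}])
# returns the entry for user C (an adult outranks the nearer child).
```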
That is, when the usage object of the television 201 includes multiple users, the television 201 may determine the user with the highest priority among them and set the playing parameters used when playing audio according to parameters such as that user's feature information, so that the main usage object of the television 201 obtains a more intelligent and customized audio playing experience.
In other embodiments, when the usage object of the television 201 includes multiple users, the television 201 may extract feature information common to those users, and then set the playing parameters used when playing audio based on that common feature information, so that all of the usage objects of the television 201 obtain a better audio playing experience.
For example, taking user A and user B as the current usage objects of the television 201, the television 201 may obtain the feature information of user A and of user B respectively, and then extract the feature information they have in common. For example, user A and user B may both be female; as another example, user A and user B may both be taller than 180 centimeters. The television 201 may then set the playing parameters used when playing audio according to the above method, based on the feature information common to user A and user B, so that the audio playing effect of the television 201 matches the features of both users at the same time.
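A sketch of extracting the common feature information, assuming each user's features arrive as a flat key-value dict (a representation the patent does not prescribe):

```python
def common_features(feature_sets):
    """Keep only the feature values shared by all current usage objects.

    feature_sets: list of dicts, e.g.
        [{"gender": "female", "height_cm": 182},
         {"gender": "female", "height_cm": 185}]
    Returns the key/value pairs on which every user agrees exactly
    (here only the gender, since the heights differ).
    """
    if not feature_sets:
        return {}
    shared = dict(feature_sets[0])
    for features in feature_sets[1:]:
        shared = {k: v for k, v in shared.items() if features.get(k) == v}
    return shared
```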
Alternatively, when the usage object of the television 201 includes multiple users, the television 201 may extract playing preferences common to those users from their historical playing records, for example, that they all like rock music. The television 201 can then set the playing parameters used when playing audio based on these common playing preferences, so that all of the users of the television 201 obtain a better audio playing experience.
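Common playing preferences can be handled the same way, e.g. by intersecting the preference sets derived from each user's historical playing records (again an assumed representation):

```python
def common_preferences(play_histories):
    """Intersect playing preferences, e.g. liked genres, derived from
    the historical playing records of all current usage objects."""
    sets = [set(history) for history in play_histories]
    return set.intersection(*sets) if sets else set()

# e.g. common_preferences([["rock", "pop"], ["rock", "folk"]]) -> {"rock"}
```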
In some embodiments, after the television 201 sets and applies the corresponding playing parameters based on parameters such as the feature information of the usage object acquired this time, the television 201 may also store a record of this modification of the playing parameters. For example, the television 201 may store the correspondence between user A and the corresponding playing parameters A, and may also record the time at which the playing parameters were modified. In this way, when the television 201 subsequently detects that its usage object is user A again, it can directly restore the current playing parameters to playing parameters A, without re-determining the playing parameters from user A's feature information according to the above method.
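One way to keep these modification records is a simple in-memory mapping from a recognized user to the parameters last applied for that user, together with the modification time; the storage format below is an assumption:

```python
import time

class PlayingParameterCache:
    """Store the correspondence between a usage object and the playing
    parameters applied for it, with the time of the modification."""

    def __init__(self):
        self._records = {}  # user_id -> (params dict, modification time)

    def save(self, user_id, params):
        # Record this modification of the playing parameters.
        self._records[user_id] = (dict(params), time.time())

    def restore(self, user_id):
        """Return the cached playing parameters for a returning user,
        or None if no record exists yet for this user."""
        record = self._records.get(user_id)
        return record[0] if record else None

# cache = PlayingParameterCache()
# cache.save("user_a", {"sound_mode": "theater", "left": 0.6, "right": 1.0})
# cache.restore("user_a")  # -> {"sound_mode": "theater", ...}
```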
Of course, if the television 201 subsequently detects a new usage object, it may determine the playing parameters corresponding to that usage object according to the method in the foregoing embodiments and then play audio using the determined playing parameters, which is not described in detail in this embodiment.
In other embodiments, there may be scenarios in which the television 201 does not detect a definite usage object; for example, no user, or no facial features, can be detected in the image captured by the television 201. In this case, the television 201 may continue to play audio using the currently set playing parameters. Alternatively, the television 201 may select the most frequently used playing parameters from the stored modification records and use them to play audio. Or, the television 201 may fall back to the most frequently used playing parameters only after failing to detect a usage object N consecutive times, which is not limited in this embodiment of the present application.
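The fallback when no usage object is detected could be sketched as follows, assuming the modification records are a list of parameter dicts and that N is a configurable threshold (the embodiment leaves N open):

```python
from collections import Counter

def fallback_parameters(modification_records, consecutive_misses,
                        n=3, current=None):
    """Choose playing parameters when no usage object is detected.

    Keep the current parameters until the usage object has been missing
    for N consecutive detections, then fall back to the parameters used
    most often in the stored modification records.
    """
    if consecutive_misses < n or not modification_records:
        return current
    counts = Counter(tuple(sorted(p.items())) for p in modification_records)
    most_common_key, _ = counts.most_common(1)[0]
    return dict(most_common_key)
```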
It should be noted that this embodiment is illustrated by taking the television 201 as the device that executes the audio control method. It can be understood that the audio control method may also be applied to other electronic devices such as vehicle-mounted devices and tablet computers, which is not limited in this embodiment of the present application.
As shown in fig. 12, an embodiment of the present application discloses an electronic device, which may be the television 201 described above. The electronic device may specifically include: a display screen 1201; one or more processors 1202; a memory 1203; a camera 1206; one or more applications (not shown); and one or more computer programs 1204. The above components may be connected by one or more communication buses 1205. Of course, the electronic device may further include a touch sensor (the touch sensor and the display screen 1201 may be integrated into a touch screen), a remote controller, a keyboard, and other components.
Wherein the one or more computer programs 1204 are stored in the memory 1203 and configured to be executed by the one or more processors 1202, the one or more computer programs 1204 comprising instructions that can be used to perform the steps associated with the above embodiments.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or make a contribution to the prior art, or all or part of the technical solutions may be implemented in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: flash memory, removable hard drive, read only memory, random access memory, magnetic or optical disk, and the like.
The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the embodiments of the present application should be covered by the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. An audio control method, comprising:
the method comprises the steps that the electronic equipment detects a use object of the electronic equipment, wherein the use object refers to one or more users having an interactive relation with the electronic equipment;
the electronic equipment acquires target data of the using object, wherein the target data comprises characteristic information of the using object;
the electronic equipment sets playing parameters for playing audio according to the target data of the using object;
and the electronic equipment plays audio according to the playing parameters.
2. The method of claim 1, wherein detecting, by an electronic device, a usage object of the electronic device comprises:
the electronic equipment acquires a first image by using a camera;
the electronic equipment determines a use object of the electronic equipment by carrying out face recognition on the first image.
3. The method of claim 2, wherein the electronic device determines the object of use of the electronic device by performing face recognition on the first image, comprising:
the electronic equipment detects human face features in the first image;
the electronic equipment determines a user corresponding to the detected human face features as a use object of the electronic equipment.
4. The method according to claim 3, wherein, when facial features of a plurality of users are included in the first image, determining the user corresponding to the detected facial features as the usage object of the electronic device comprises:
and the electronic equipment determines the user with the highest priority in the plurality of users as the use object of the electronic equipment.
5. The method according to any one of claims 1 to 3, wherein, when the usage object includes a plurality of users, the characteristic information of the usage object is characteristic information common to the plurality of users.
6. The method according to any one of claims 1 to 5, wherein the characteristic information of the usage object includes at least one of the sex, age, or height of the usage object;
wherein setting, by the electronic device, playing parameters for playing audio according to the target data of the usage object comprises:
when the characteristic information of the using object is first characteristic information, the electronic equipment sets a sound effect mode in the playing parameters to be a first sound effect mode;
and when the characteristic information of the using object is second characteristic information, the electronic equipment sets the sound effect mode in the playing parameters to be a second sound effect mode.
7. The method according to any one of claims 1-6, wherein the target data further comprises a play record of the usage object on another device;
wherein setting, by the electronic device, playing parameters for playing audio according to the target data of the usage object comprises:
and the electronic equipment sets a sound effect mode in the playing parameters according to the playing record of the using object on other equipment.
8. The method according to any one of claims 1-7, wherein the target data further comprises location data of the usage object;
wherein setting, by the electronic device, playing parameters for playing audio according to the target data of the usage object comprises:
and the electronic equipment sets sound effect parameters in the playing parameters according to the position data of the using object.
9. The method of claim 8, wherein the sound effect parameters comprise a volume of a left channel and a volume of a right channel;
wherein setting, by the electronic device, the sound effect parameters in the playing parameters according to the position data of the usage object comprises:
when the position data of the usage object indicates that the usage object is located on the left side of the electronic device, the electronic device sets the volume of the left channel to be smaller than the volume of the right channel;
when the position data of the usage object indicates that the usage object is located on the right side of the electronic device, the electronic device sets the volume of the left channel to be greater than the volume of the right channel.
10. The method according to claim 8 or 9, wherein the target data further comprises room information where the electronic device is located;
wherein setting, by the electronic device, playing parameters for playing audio according to the target data of the usage object comprises:
and the electronic equipment sets sound effect parameters in the playing parameters according to the position data of the using object and the room information.
11. The method according to any one of claims 1-10, further comprising, after the electronic device sets the playing parameters for playing audio according to the target data of the usage object:
the electronic equipment stores the corresponding relation between the use object and the playing parameter;
and when the electronic equipment detects the same using object, the electronic equipment uses the playing parameter corresponding to the using object to play audio.
12. The method according to any one of claims 1-11, further comprising, before an electronic device detects a usage object of the electronic device:
when the electronic equipment is used for the first time, the electronic equipment prompts a user to enter characteristic information.
13. An electronic device, comprising:
a display screen;
a camera;
one or more processors;
a memory;
wherein the memory has stored therein one or more computer programs, the one or more computer programs comprising instructions, which when executed by the electronic device, cause the electronic device to perform an audio control method as claimed in any of claims 1-12.
14. A computer-readable storage medium having instructions stored thereon, which when run on an electronic device, cause the electronic device to perform an audio control method as claimed in any one of claims 1-12.
15. A computer program product, characterized in that it causes an electronic device to execute an audio control method according to any of claims 1-12, when said computer program product is run on said electronic device.
CN202110485169.0A 2021-04-30 2021-04-30 Audio control method and electronic equipment Pending CN113377323A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110485169.0A CN113377323A (en) 2021-04-30 2021-04-30 Audio control method and electronic equipment

Publications (1)

Publication Number Publication Date
CN113377323A true CN113377323A (en) 2021-09-10

Family

ID=77570411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110485169.0A Pending CN113377323A (en) 2021-04-30 2021-04-30 Audio control method and electronic equipment

Country Status (1)

Country Link
CN (1) CN113377323A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104010147A (en) * 2014-04-29 2014-08-27 京东方科技集团股份有限公司 Method for automatically adjusting volume of audio playing system and audio playing device
CN105208443A (en) * 2015-09-21 2015-12-30 合一网络技术(北京)有限公司 Method, device and system for achieving television volume adjustment
CN105915826A (en) * 2015-12-12 2016-08-31 乐视致新电子科技(天津)有限公司 Method for automatically adjusting television sound effect and device thereof
CN106254938A (en) * 2016-08-29 2016-12-21 北海华源电子有限公司 There is the television set of automatic sound-volume adjusting function
CN108521618A (en) * 2018-03-13 2018-09-11 深圳市沃特沃德股份有限公司 Audio frequency playing method and device
CN108616791A (en) * 2018-04-27 2018-10-02 青岛海信移动通信技术股份有限公司 A kind of audio signal playing method and device
CN108683944A (en) * 2018-05-14 2018-10-19 深圳市零度智控科技有限公司 Volume adjusting method, device and the computer readable storage medium of smart television
CN112380972A (en) * 2020-11-12 2021-02-19 四川长虹电器股份有限公司 Volume adjusting method applied to television scene

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116954093A (en) * 2023-07-24 2023-10-27 快住智能科技(苏州)有限公司 Intelligent hotel equipment control method and system
CN116954093B (en) * 2023-07-24 2024-02-20 快住智能科技(苏州)有限公司 Intelligent hotel equipment control method and system

Similar Documents

Publication Publication Date Title
US10966044B2 (en) System and method for playing media
CN108231073B (en) Voice control device, system and control method
CN113542839B (en) Screen projection method of electronic equipment and electronic equipment
CN110166820B (en) Audio and video playing method, terminal and device
CN110022487A (en) Volume adjusting method and device
CN106371799A (en) Volume control method and device for multimedia playback equipment
KR102538775B1 (en) Method and apparatus for playing audio, electronic device, and storage medium
WO2020259542A1 (en) Control method for display apparatus, and related device
CN111726678B (en) Method for continuously playing multimedia content between devices
WO2020134560A1 (en) Live broadcast room switching method and apparatus, and terminal, server and storage medium
CN105120301B (en) Method for processing video frequency and device, smart machine
CN104112459B (en) Method and apparatus for playing audio data
WO2022052791A1 (en) Method for playing multimedia stream and electronic device
CN112075086A (en) Method for providing contents and electronic device supporting the same
CN113574525A (en) Media content recommendation method and equipment
JP2023541636A (en) How to switch scenes, terminals and storage media
CN113965715A (en) Equipment cooperative control method and device
CN113921002A (en) Equipment control method and related device
WO2022022743A1 (en) Method for identifying user on public device, and electronic device
CN113377323A (en) Audio control method and electronic equipment
CN110808021A (en) Audio playing method, device, terminal and storage medium
CN115731923A (en) Command word response method, control equipment and device
CN115695860A (en) Method for recommending video clip, electronic device and server
CN114120987A (en) Voice awakening method, electronic equipment and chip system
CN113572798A (en) Device control method, system, apparatus, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination