CN109284081B - Audio output method and device and audio equipment - Google Patents

Info

Publication number
CN109284081B
CN109284081B (application CN201811102136.8A)
Authority
CN
China
Prior art keywords
audio
target user
audio output
user
determining
Prior art date
Legal status
Active
Application number
CN201811102136.8A
Other languages
Chinese (zh)
Other versions
CN109284081A (en)
Inventor
陈海新
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201811102136.8A
Publication of CN109284081A
Application granted
Publication of CN109284081B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of this application disclose an audio output method, an audio output apparatus, and an audio device. Applied to an audio device that includes an audio output unit, the method comprises: acquiring position information of a target user; determining an audio output direction and an audio output power according to that position information; and controlling the audio output unit to output audio data according to the determined direction and power. With this method, audio data can be output directionally, improving the user experience.

Description

Audio output method and device and audio equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to an audio output method and apparatus, and an audio device.
Background
With the rapid development of computer technology, voice search accounts for a large share of search-engine use. Most existing audio devices support a voice search function, which brings convenience to users' daily life and work: the device performs semantic recognition on a voice instruction issued by the user and outputs the corresponding audio data according to the recognition result.
Generally, an audio device can only perform semantic analysis on a user's voice and output the desired audio data. However, as people's needs grow, traditional audio devices can no longer meet them.
In the same environment there may be several people, and if one user operates an audio device, the audio it emits may disturb the others. For example, in a family, the parents may want to listen to a song while the child is doing homework; turning on the audio device would disturb the child.
Disclosure of Invention
The embodiments of this application aim to provide an audio output method, an audio output apparatus, and an audio device, solving the prior-art problems that an audio device outputting audio data to a target user disturbs other people in the same environment, fails to meet user needs, and results in a poor user experience.
In order to solve the above technical problem, the embodiment of the present application is implemented as follows:
the audio output method provided by the embodiment of the application comprises the following steps:
acquiring position information of a target user;
determining an audio output direction and audio output power according to the position information of the target user;
and controlling the audio output unit to output audio data according to the audio output direction and the audio output power.
Optionally, the location information of the target user includes:
a direction of the target user relative to the audio device, and a distance between the target user and the audio device;
determining an audio output direction and an audio output power according to the position information of the target user includes:
determining an audio output direction according to the direction of the target user relative to the audio equipment;
and determining audio output power according to the distance between the target user and the audio equipment.
Optionally, the audio device further includes a camera, and before obtaining the location information of the target user, the method further includes:
determining a user identification of the target user;
acquiring a first face feature corresponding to the user identifier;
and acquiring a face image matched with the first face feature from the image shot by the camera, and determining a user corresponding to the face image as the target user.
Optionally, determining the user identifier of the target user includes:
receiving an input directional sounding instruction;
and performing voiceprint recognition on the directional sounding instruction, and determining the user identification of the target user for inputting the directional sounding instruction according to the voiceprint recognition result.
Optionally, obtaining the location information of the target user includes:
determining the position of the face of the target user through the image shot by the camera, and determining the direction of the target user relative to the audio equipment according to the position of the face of the target user;
obtaining a distance between the target user and the audio device based on a light pulse ranging mechanism.
Optionally, obtaining a face image matching the first face feature from an image captured by the camera includes:
shooting an image through the camera;
extracting second face features in the image, and matching the second face features with the first face features;
and if the second face features are matched with the first face features, acquiring a face image corresponding to the second face features.
Optionally, determining an audio output direction according to the direction of the target user relative to the audio device includes:
determining a movement track of the target user according to the direction of the target user relative to the audio equipment;
determining the moving track of the audio output unit in the audio equipment according to the moving track of the target user;
and determining the audio output direction based on the movement trajectory of the audio output unit within the audio device.
Optionally, the target user includes a plurality of users, the audio output unit includes a plurality of audio output units, each target user corresponds to one or more of the audio output units, and determining a movement trajectory of the audio output unit in the audio device according to the movement trajectory of the target user includes:
and respectively determining the movement track of the audio output unit corresponding to each user identifier according to the movement track of each target user.
In a second aspect, an embodiment of the present application provides an audio output apparatus. The apparatus includes an audio output unit, and further includes:
the acquisition module is used for acquiring the position information of a target user;
the determining module is used for determining the audio output direction and the audio output power according to the position information of the target user;
and the output module is used for controlling the audio output unit to output audio data according to the audio output direction and the audio output power.
Optionally, the location information of the target user includes:
a direction of the target user relative to the audio device, and a distance between the target user and the audio device;
the determining module includes:
the direction determining unit is used for determining an audio output direction according to the direction of the target user relative to the audio equipment;
and the power determining unit is used for determining the audio output power according to the distance between the target user and the audio equipment.
Optionally, the apparatus further includes a camera, as well as:
the identification determining module is used for determining the user identification of the target user;
the characteristic obtaining module is used for obtaining a first face characteristic corresponding to the user identification;
and the matching module is used for acquiring a face image matched with the first face feature from the image shot by the camera and determining a user corresponding to the face image as the target user.
Optionally, the identification determining module includes:
the receiving unit is used for receiving an input directional sounding instruction;
and the identification unit is used for carrying out voiceprint identification on the directional sounding instruction and determining the user identification of the target user for inputting the directional sounding instruction according to the voiceprint identification result.
Optionally, the determining module includes:
the direction determining unit is used for determining the position of the face of the target user through the image shot by the camera, and determining the direction of the target user relative to the audio equipment according to the position of the face of the target user;
a distance determining unit for obtaining a distance between the target user and the audio device based on a light pulse ranging mechanism.
Optionally, the matching module includes:
the image acquisition unit is used for shooting an image through the camera;
the extraction unit is used for extracting second face features in the image and matching the second face features with the first face features;
and the image determining unit is used for acquiring a face image corresponding to the second face feature if the second face feature is matched with the first face feature.
Optionally, the output direction determining unit is configured to:
determining a movement track of the target user according to the direction of the target user relative to the audio equipment;
determining the moving track of the audio output unit in the audio equipment according to the moving track of the target user;
and determining the audio output direction based on the movement trajectory of the audio output unit within the audio device.
Optionally, the target users include a plurality of users, the audio output unit includes a plurality of audio output units, each target user corresponds to one or more of the audio output units, and the output direction determining unit is configured to:
and respectively determining the movement track of the audio output unit corresponding to each user identification according to the movement track of each target user.
In a third aspect, an embodiment of the present application provides an audio device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the audio output method provided in the foregoing embodiment.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the audio output method provided in the foregoing embodiments.
According to the technical solution provided by the embodiments of this application, the position information of a target user is acquired, the audio output direction and audio output power are determined from that position information, and the audio output unit is controlled to output audio data accordingly. Thus, when multiple users are present, the audio device can locate the target user and output audio data to that user without disturbing others; at the same time, it can track the user's real-time position and adjust the output direction and power as the position changes, meeting the user's needs and improving the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating an embodiment of an audio output method according to the present application;
FIG. 2 is a flow chart of another embodiment of the audio output method of the present application;
FIG. 3 is a schematic diagram of an audio device distance detection method according to the present application;
FIG. 4 is a schematic diagram illustrating a moving track of an audio output unit according to the present application;
FIG. 5 is a schematic structural diagram of an audio output device according to the present application;
fig. 6 is a schematic structural diagram of an audio device according to the present application.
Detailed Description
The embodiment of the application provides an audio output method and device and audio equipment.
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them; all other embodiments derived by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present application.
Example one
As shown in fig. 1, the execution subject of the method may be an audio device, for example a sound box, a video device with an audio output function, or another device with an audio output function. The audio device includes an audio output unit that can move within the device, and the method enables directional audio output. The method may specifically include the following steps:
in step S102, location information of the target user is acquired.
Here, the target user is the user to whom audio data is to be output directionally. The position information may be the distance between the user and the audio device, the user's coordinates, the direction of the user relative to the audio device, and the like.
In implementation: as noted above, most existing audio devices support voice search, performing semantic recognition on the user's voice instruction and outputting the corresponding audio data. Generally, however, an audio device can only analyze the user's voice semantically and output the requested audio to everyone within earshot. Some audio devices are also equipped with a camera component so that the user can take photos or videos, but as people's needs grow, traditional audio devices can no longer meet them. When several people share the same environment and one of them uses the audio device, the emitted audio may disturb the others, for example parents who want to listen to a song while their child is doing homework. Outputting audio data to a target user thus interferes with other people, fails to meet the need for directional sound, and results in a poor user experience. The embodiments of this application therefore provide a technical solution to these problems, which may specifically include the following:
the audio device may be configured with a mechanism for obtaining user location information, for example, the user location information may be determined by analyzing an image, or the user location information may be determined by image and distance detection, etc. Specifically, for example, the audio device may include components such as a camera and a range finder, when a user (i.e., a target user) needs to use a function of directional sound production in the audio device, the audio device may turn on the camera, at this time, the target user may stand at a specified position at a certain distance from the audio device, the camera may be preset to acquire an image at the specified position, and the user in the image is taken as the target user. After the target user can stand at the designated position, the camera can shoot an image at the designated position, and the user in the image can be used as the target user to which the audio data needs to be directionally output. Then, the target user can move about freely, in the process, the camera can continue to shoot the image of the target user, and the shot image can be analyzed to determine the position information of the target user.
In practice, besides determining the user's position with the camera, the position information can also be acquired with components such as a laser range finder or an electronic range finder; this can be configured according to the actual situation and is not limited by the embodiments of this application.
In step S104, an audio output direction and an audio output power are determined according to the position information of the target user.
The audio output unit may be a directional sound-emitting component that can move within the audio device, for example an ultrasonic speaker capable of emitting sound directionally.
In implementation, the position information may include the distance between the target user and the audio device (for example 2 or 3 metres) and the direction of the target user relative to the device (for example due east, or a direction measured against some specified reference). After the position information is obtained in step S102, the audio output direction of the audio output unit is determined from the target user's direction relative to the device, and the audio output power from the distance between them. The device may set the output power according to a preset rule, for example making power proportional to distance: the greater the distance, the greater the output power, with a proportionality factor of 1, 3, 5, and so on. For example, if the acquired position information is "the user is 2 metres due east of the audio device" and the proportionality factor is 2, then the audio output direction is due east and the audio output power is 4 (i.e., 2 × 2) watts.
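As a rough illustration of step S104, the mapping from position information to output direction and power might be sketched as follows. The function name and the linear power rule are assumptions for illustration; the proportionality factor of 2 and the 2-metre example follow the description above.

```python
# Hypothetical sketch of step S104: derive the audio output direction and
# power from the target user's position information. Power is made
# proportional to distance, as the description suggests.

def determine_output(direction_deg: float, distance_m: float,
                     power_per_metre: float = 2.0) -> tuple:
    """Return (audio_output_direction_deg, audio_output_power_watts)."""
    # The farther the target user, the higher the output power.
    output_power = power_per_metre * distance_m
    return direction_deg, output_power

# Example from the description: user 2 metres due east (90 deg),
# factor 2 -> 4 watts.
direction, power = determine_output(direction_deg=90.0, distance_m=2.0)
print(direction, power)  # 90.0 4.0
```

With a factor of 3, the same position would yield 6 watts, consistent with the rule that output power scales with distance.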
In step S106, the audio output unit is controlled to output audio data according to the audio output direction and the audio output power.
The audio data may be stored on the audio device itself; stored on a terminal device or server to which the audio device connects in a wired or wireless manner; or fetched online after the audio device connects to such a terminal device or server.
In implementation, the target user may speak the name of the desired audio data (a song title, the name of a crosstalk piece, and so on); a voice-collecting component such as a microphone may be arranged in the audio device and kept on in real time. After the audio device obtains the name, it acquires the corresponding audio data, for example by sending an acquisition request to a specified smart device (such as a mobile phone, tablet computer, or server), which retrieves the audio data and returns it to the audio device. The audio device then controls the audio output unit to face the audio output direction determined in step S104 and outputs the audio data to the target user at the determined audio output power.
The embodiments of this application provide an audio output method: acquire the position information of a target user, determine the audio output direction and power from that information, and control the audio output unit accordingly. When multiple users are present, the audio device can thus locate the target user and output audio data to that user without disturbing others, while also tracking the user's real-time position and adjusting the output direction and power as it changes, meeting the user's needs and improving the user experience.
Example two
As shown in fig. 2, the execution subject of the method may likewise be an audio device such as a sound box, a video device with an audio output function, or another device with an audio output function. The audio device includes an audio output unit that can move within the device, and the method enables directional audio output. The method specifically includes the following steps:
in step S202, an input directional utterance instruction is received.
The directional sounding instruction is an instruction that tells the audio device to output audio in directional-sound mode. It can be any voice input by the user that contains one or more keywords such as "on", "start", or "directional".
In implementation, the audio device may be provided with a microphone and similar components. When the user issues an instruction, the microphone collects it and the device performs semantic analysis; if the analysis finds a keyword for starting directional sound, the device determines that the input is a directional sounding instruction and switches its current working mode to directional-sound mode. For example, the user may say "please turn on the directional sound mode" toward the audio device; the microphone collects the voice, semantic analysis extracts keywords such as "turn on" and "directional", and the device determines that the input is a directional sounding instruction.
In step S204, voiceprint recognition is performed on the directional sound emission instruction, and the user identifier of the target user for inputting the directional sound emission instruction is determined according to the voiceprint recognition result.
Voiceprint recognition here means performing signal processing on the directional sounding instruction, extracting its voiceprint features, and identifying the user from those features. The user identifier may be the user's name, a code, and so on.
In implementation, the audio device may store the voiceprint features of a number of different users, each with a corresponding user identifier. After receiving the directional sounding instruction, the device performs signal processing on it, extracts its voiceprint features, searches the stored voiceprint features for a match, and obtains the user identifier corresponding to the matched features, that is, the user identifier of the target user who input the instruction.
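A minimal sketch of the matching in step S204 might look as follows. The feature representation (a numeric vector), the cosine-similarity rule, and the threshold are all assumptions for illustration; the patent does not specify the matching algorithm.

```python
import math

# Hypothetical sketch of step S204: match an extracted voiceprint feature
# vector against stored user voiceprints. Cosine similarity with a
# threshold is an assumed matching rule, not the patent's.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify_user(extracted, stored: dict, threshold: float = 0.9):
    """Return the user identifier whose stored voiceprint best matches
    the extracted feature, or None if no match clears the threshold."""
    best_id, best_score = None, threshold
    for user_id, feature in stored.items():
        score = cosine_similarity(extracted, feature)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id

stored = {"UserA": [0.9, 0.1, 0.3], "UserB": [0.1, 0.8, 0.5]}
print(identify_user([0.88, 0.12, 0.31], stored))  # UserA
```

Returning `None` for an unmatched voiceprint corresponds to the fallback described next: the device can store the new feature and allocate a fresh user identifier.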
In addition, if no stored voiceprint feature matches the extracted one, the extracted feature can be stored and a corresponding user identifier allocated or set for it.
In step S206, a first facial feature corresponding to the user identifier is acquired.
The first face features fall into two types, geometric features and characteristic features. For example, the geometric features may be the relations among facial organs such as the eyes, nose, and mouth: distance, area, angle, and so on.
In implementation, the audio device may store the face features of multiple users, each with a corresponding user identifier, so that user identifiers, face features, and voiceprint features correspond to one another, as shown in Table 1.
TABLE 1

User identifier | Face feature   | Voiceprint feature
User A          | Face feature 1 | Voiceprint feature 1
User B          | Face feature 2 | Voiceprint feature 2
After the user identifier of the target user is determined in step S204, the face features corresponding to that identifier can be looked up among the face features stored in the audio device. For example, if the user identifier corresponding to user A's voiceprint feature is UserA, and the face feature stored for UserA is a data set W characterizing face information, then the features in W are taken as the target user's face features.
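The Table 1 mapping and the lookup in step S206 can be sketched as a simple per-user record; the record field names and placeholder feature values below are hypothetical.

```python
# Hypothetical sketch of the Table 1 structure: each user identifier maps
# to that user's stored face feature and voiceprint feature.

user_profiles = {
    "UserA": {"face_feature": "face-feature-1", "voiceprint": "voiceprint-1"},
    "UserB": {"face_feature": "face-feature-2", "voiceprint": "voiceprint-2"},
}

def get_first_face_feature(user_id: str):
    """Step S206: look up the stored (first) face feature for the user
    identified by voiceprint recognition; None if the user is unknown."""
    profile = user_profiles.get(user_id)
    return profile["face_feature"] if profile else None

print(get_first_face_feature("UserA"))  # face-feature-1
```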
After the target user's face features are obtained through the above process, the object of directional sound (i.e., the target user) can be determined. In practice, a face image matching the first face feature is obtained from the images captured by the camera, and the user corresponding to that face image is determined to be the target user; the specific process is described in steps S208 to S214 below.
In step S208, an image is captured by the camera.
The captured image may contain the faces of one or more different users.
In implementation, the audio device may be equipped with one or more cameras, which may move freely as needed. While a camera moves, if the picture it frames is detected to contain face information, an image is captured. Face detection here means detecting and extracting face images from an input image: typically, Haar features and the AdaBoost algorithm are used to train a cascade classifier that classifies each block of the image, and a rectangular region that passes the cascade classifier is judged to be a face image. When capturing, a single camera can cover its whole acquirable range by moving continuously, or several cameras can together capture images of every user's face; a captured image may contain the faces of one or more users.
In step S210, a second face feature in the image is extracted, and the second face feature is matched with the first face feature.
In implementation, face recognition is performed on the captured image, features are extracted from each recognized face, and the extracted features are compared with the determined face features of the target user (the first face features). If the captured image contains only one face, its extracted features are compared directly with the first face features; if it contains several faces, features are extracted from the face images one by one and each is matched against the first face features. For example, if users A, B, and C appear in the captured image, their face features are extracted and characterized by the data sets W, Y, and Z respectively, and each is matched against the set X characterizing the target user's first face features.
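The per-face matching in step S210 might be sketched as follows. Euclidean distance with a threshold is an assumed matching rule, and the feature vectors are made-up examples; the patent leaves the comparison method open.

```python
import math

# Hypothetical sketch of step S210: compare each face feature extracted
# from the captured image (second features) against the target user's
# stored first face feature.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_matching_face(first_feature, candidates: dict,
                       max_distance: float = 0.5):
    """candidates maps a face label in the image to its feature vector.
    Return the label of the first candidate within max_distance, else None."""
    for face_label, feature in candidates.items():
        if euclidean(first_feature, feature) <= max_distance:
            return face_label
    return None

X = [1.0, 2.0, 3.0]  # target user's first face feature (made up)
candidates = {"W": [4.0, 1.0, 0.0], "Y": [1.1, 2.1, 2.9]}
print(find_matching_face(X, candidates))  # Y
```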
In step S212, if the second face feature matches the first face feature, a face image corresponding to the second face feature is obtained.
In implementation, if a face feature in the captured image matches the first face feature of the target user, the corresponding face image is acquired. For example, the camera may take multiple images, each containing the faces of different users. If the feature set W of user A's face in one of the images matches the feature set X of the first face feature, the target user has been found in that image, and the face image of user A may be acquired.
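The matching of the extracted feature sets W, Y, Z against the target set X can be sketched as a nearest-match search over feature vectors. The Euclidean metric and the 0.6 threshold are illustrative assumptions; a real system would choose both from the face-recognition model in use.

```python
import math

def find_target(extracted, target_feature, threshold=0.6):
    """Return the identifier of the first user whose extracted face-feature
    vector lies within `threshold` of the target's first face feature,
    or None if no face in the image matches."""
    for user_id, vector in extracted.items():
        if math.dist(vector, target_feature) <= threshold:
            return user_id
    return None
```

For the example above one would call `find_target({"A": W, "B": Y, "C": Z}, X)`: a return value of `"A"` means the target user has been found in the image and user A's face image may be acquired.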
In step S214, the user corresponding to the face image is determined as the target user.
In step S216, the position of the face of the target user is determined from the image captured by the camera, and the direction information of the target user is determined according to the position of the face of the target user.
In implementation, the movement track of the target user can be captured as the camera moves while shooting; in the images obtained this way, the position of the target user is determined whenever the face image is matched. Once the position of the target user is determined, the direction information of the target user can be acquired through the camera or another device capable of measuring distance.
In step S218, the distance between the target user and the audio device is acquired based on the optical pulse ranging mechanism.
The distance measurement mechanism based on optical pulses may be a time-of-flight (TOF) method, a structured-light method, or the like.
In implementation, taking a time of flight (TOF) method as an example to obtain distance information between a user and an audio output unit, as shown in fig. 3, an audio device may continuously transmit a light pulse (generally invisible light) to a target user through a transmitter, then a detector receives the light pulse reflected from the target user, and calculates a distance between the target user and the audio device by detecting a time of flight (round trip) of the light pulse.
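The time-of-flight computation reduces to halving the round-trip path travelled at the speed of light. The sketch below assumes the device measures the pulse's round-trip time directly:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_time_s):
    # The light pulse travels to the target user and back, so the
    # one-way distance is half the total path flown in that time.
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0
```

For instance, a round trip of about 20 nanoseconds corresponds to a target roughly 3 metres from the device.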
In step S220, the audio output power is determined according to the distance between the target user and the audio device.
In an implementation, the distance between the target user and the audio device (or the audio output unit) may determine the audio output power of the audio output unit. For example, when the distance is short, the output power may be kept small, which saves resources while still meeting the user's needs; when the distance is large, the output power may be increased so that the volume heard by the target user does not change as the distance changes. The distance between the target user and the audio device and the audio output power of the audio output unit may therefore be positively correlated or directly proportional, with the specific proportionality coefficient adjustable according to a preset rule. Alternatively, the audio output power may be determined from the distance through a preset output power scheme; the specific scheme may vary with the actual application scenario, and this application is not limited in this respect.
In addition, the distance between the target user and the audio device changes as the user's position changes. When the target user's position changes, the distance between the target user and the audio device may be acquired again, and the audio output power of the audio output unit adjusted according to that distance. For example, suppose the acquired target user position information is 3 meters due east of the audio device, and the proportionality coefficient between distance and audio output power is 2; the audio output power of the audio output unit may then be determined to be 6 watts. If the target user thereafter moves due west of the audio device at one meter per second (the distance between the user and the audio device may be sampled once per second), the distance becomes 2 meters after one second and the audio output power may be adjusted to 4 watts, and so on, continually determining the audio output power of the audio output unit.
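The linear distance-to-power rule of the worked example above (coefficient 2: 3 metres gives 6 watts, 2 metres gives 4 watts) can be sketched as follows. The linear form and the coefficient are the example's assumptions; a device could equally use any preset lookup scheme.

```python
def audio_output_power(distance_m, coefficient=2.0):
    # Positive correlation: power grows with distance so the volume
    # heard by the target user stays roughly constant as they move.
    return coefficient * distance_m
```

Resampling the distance every second and re-evaluating this function reproduces the 6 W to 4 W adjustment described in the example.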
In step S222, a movement track of the target user is determined according to the direction of the target user relative to the audio device.
In implementation, when the direction of the target user relative to the audio device changes, the relative direction between the target user and the audio device at each time point is acquired, forming the movement track of the target user. For example, when the target user, starting 2 meters due east of the audio device, moves westward relative to the device, the movement track consists of the user's westward positions at each time point.
In step S224, a movement trajectory of the audio output unit within the audio device is determined according to the movement trajectory of the target user.
In an implementation, after the audio device obtains the movement track of the target user, the movement track of the corresponding audio output unit within the audio device may be determined from it. For example, when the user moves from east to west relative to the audio device, the audio output unit also moves from east to west within the audio device, keeping the same timing and direction as the user's movement track.
Step S224 may be processed in various ways. Besides the processing above, when there are multiple target users it may also be handled differently. Specifically, when there are multiple target users and multiple audio output units, with each target user corresponding to one or more audio output units, step S224 may be implemented by determining, from the movement track of each target user, the movement track of the audio output unit corresponding to that user's identifier.
In implementation, the movement tracks of the corresponding audio output units are determined according to the movement track of each target user, as shown in fig. 4, the movement tracks of the audio output units 1, 2 and 3 corresponding to the user a, the user B and the user C are not interfered with each other, and the movement track of each audio output unit is determined only by the corresponding user movement track.
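The per-user mapping of step S224, in which each target user drives the trajectory of only their own audio output unit (as in fig. 4, where units 1, 2, and 3 follow users A, B, and C without interfering), can be sketched as a simple dictionary mapping. The identifiers and the list-of-waypoints track representation are illustrative assumptions.

```python
def unit_trajectories(user_tracks, unit_for_user):
    """user_tracks maps each target user's identifier to their movement
    track; unit_for_user maps a user identifier to the audio output unit
    assigned to that user. Each unit's track depends only on its own user,
    so the tracks do not interfere with one another."""
    return {unit_for_user[uid]: track for uid, track in user_tracks.items()}
```

With one unit per user this is a one-to-one relabelling; a user assigned several units would simply appear several times in `unit_for_user`.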
In step S226, an audio output direction is determined based on the movement trajectory within the audio device.
In implementation, the audio output unit may move along the movement trajectory within the audio device, thereby keeping its movement synchronized with the target user and aligning the audio output direction with the target user.
In step S228, the audio output unit is controlled to output audio data corresponding to the directional sounding instruction according to the audio output direction and the audio output power.
For the specific processing procedure of the above S228, reference may be made to the relevant content of S106 in the above first embodiment, which is not described herein again.
The embodiment of the application provides an audio output method, which includes obtaining position information of a target user, determining an audio output direction and audio output power according to the position information, and controlling an audio output unit to output audio data according to the audio output direction and the audio output power. Therefore, when multiple users are present, the audio device can locate the target user and output audio data toward that user without disturbing others. Meanwhile, the audio device can also track the user's real-time position and adjust the output direction and output power according to the target user's position information, meeting the user's needs and improving the user experience.
EXAMPLE III
Based on the same idea as the audio output method above, an embodiment of the present application further provides an audio output device. The device includes an audio output unit, and the audio output unit can move within the audio device, as shown in fig. 5.
The audio output device includes: an obtaining module 501, a determining module 502, and an outputting module 503, wherein:
an obtaining module 501, configured to obtain location information of a target user;
a determining module 502, configured to determine an audio output direction and an audio output power according to the location information of the target user;
an output module 503, configured to control the audio output unit to output audio data according to the audio output direction and the audio output power.
In this embodiment of the present application, the location information of the target user includes: a direction of the target user relative to the audio device, and a distance between the target user and the audio device;
the determining module 502 includes:
the direction determining unit is used for determining an audio output direction according to the direction of the target user relative to the audio equipment;
and the power determining unit is used for determining the audio output power according to the distance between the target user and the audio equipment.
In this embodiment of the present application, the audio device further includes a camera, and the apparatus further includes:
the identification determining module is used for determining the user identification of the target user;
the characteristic obtaining module is used for obtaining a first face characteristic corresponding to the user identification;
and the matching module is used for acquiring a face image matched with the first face feature from the image shot by the camera and determining a user corresponding to the face image as the target user.
In an embodiment of the present application, the identification determining module includes:
the receiving unit is used for receiving an input directional sounding instruction;
and the identification unit is used for carrying out voiceprint identification on the directional sounding instruction and determining the user identification of the target user for inputting the directional sounding instruction according to the voiceprint identification result.
In this embodiment of the present application, the determining module 502 includes:
the direction determining unit is used for determining the position of the face of the target user through the image shot by the camera, and determining the direction of the target user relative to the audio equipment according to the position of the face of the target user;
a distance determining unit for obtaining a distance between the target user and the audio device based on a light pulse ranging mechanism.
In an embodiment of the present application, the matching module includes:
the image acquisition unit is used for shooting images through the camera;
the extraction unit is used for extracting second face features in the image and matching the second face features with the first face features;
and the image determining unit is used for acquiring the face image corresponding to the second face feature if the second face feature is matched with the first face feature.
In an embodiment of the present application, the output direction determining unit is configured to:
determining a movement track of the target user according to the direction of the target user relative to the audio equipment;
determining the movement track of the audio output unit in the audio equipment according to the movement track of the target user;
determining the audio output direction based on a movement trajectory within the audio device.
In this embodiment of the application, there are a plurality of target users, the audio output unit includes a plurality of audio output units, each target user corresponds to one or more of the audio output units, and the output direction determining unit is configured to:
and respectively determining the movement track of the audio output unit corresponding to each user identifier according to the movement track of each target user.
The embodiment of the application provides an audio output device which, by acquiring the position information of a target user, determines an audio output direction and audio output power according to that position information, and controls the audio output unit to output audio data according to the audio output direction and the audio output power. Therefore, when multiple users are present, the audio device can locate the target user and output audio data toward that user without disturbing others. Meanwhile, the audio device can also track the user's real-time position and adjust the output direction and output power according to the target user's position information, meeting the user's needs and improving the user experience.
Example four
Figure 6 is a schematic diagram of the hardware architecture of an audio device implementing various embodiments of the present application.
the audio device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. It will be understood by those skilled in the art that the audio device configuration shown in fig. 6 does not constitute a limitation of the audio device, that the audio device may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components, and that the audio device 600 further includes an audio output unit that is movable within the audio device. In the embodiment of the present application, the audio device includes, but is not limited to, a sound box and the like.
The processor 610 is configured to obtain location information of the target user.
The processor 610 is further configured to determine an audio output direction and an audio output power according to the location information of the target user.
The processor 610 is further configured to control the audio output unit to output audio data according to the audio output direction and the audio output power.
Further, the location information of the target user includes a direction of the target user relative to the audio device, and a distance between the target user and the audio device;
the processor 610 is further configured to determine an audio output direction according to a direction of the target user relative to the audio device;
the processor 610 is further configured to determine an audio output power according to a distance between the target user and the audio device
The processor 610 is further configured to determine a user identification of the target user.
In addition, the processor 610 is further configured to obtain a first facial feature corresponding to the user identifier.
In addition, the processor 610 is further configured to acquire a face image matched with the first face feature from the image captured by the camera, and determine a user corresponding to the face image as the target user.
In addition, the processor 610 is further configured to receive an input directional sounding instruction.
In addition, the processor 610 is further configured to perform voiceprint recognition on the directional sound emission instruction, and determine, according to the voiceprint recognition result, a user identifier of a target user to which the directional sound emission instruction is input.
In addition, the processor 610 is further configured to determine a position of the face of the target user according to the image captured by the camera, and determine a direction of the target user relative to the audio device according to the position of the face of the target user.
In addition, the processor 610 is further configured to obtain a distance between the target user and the audio device based on a light pulse ranging mechanism.
In addition, the processor 610 is further configured to capture an image through the camera.
In addition, the processor 610 is further configured to extract a second facial feature in the image, and match the second facial feature with the first facial feature.
In addition, the processor 610 is further configured to determine an audio output unit corresponding to each of the user identifiers.
In addition, the processor 610 is further configured to obtain a face image corresponding to the second face feature if the second face feature matches the first face feature.
In addition, the processor 610 is further configured to determine a movement trajectory of the target user according to a direction of the target user relative to the audio device.
In addition, the processor 610 is further configured to determine the audio output direction based on a movement trajectory within the audio device.
In addition, the processor 610 is further configured to determine, according to the movement trajectory of each target user, a movement trajectory of an audio output unit corresponding to each user identifier.
The embodiment of the application provides an audio device which, by acquiring the position information of a target user, determines an audio output direction and audio output power according to that position information, and controls the audio output unit to output audio data according to the audio output direction and the audio output power. Therefore, when multiple users are present, the audio device can locate the target user and output audio data toward that user without disturbing others. Meanwhile, the audio device can also track the user's real-time position and adjust the output direction and output power according to the target user's position information, meeting the user's needs and improving the user experience.
It should be understood that, in the embodiment of the present application, the radio frequency unit 601 may be used for receiving and sending signals during a message sending and receiving process or a call process; specifically, it receives downlink data from a base station and delivers the received downlink data to the processor 610 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 601 may also communicate with a network and other devices through a wireless communication system.
The audio device provides wireless broadband internet access to the user via the network module 602, such as to assist the user in emailing, browsing web pages, and accessing streaming media.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 may also provide audio output related to a specific function performed by the audio apparatus 600 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used to receive audio or video signals. The input unit 604 may include a Graphics Processing Unit (GPU) 6041 and a microphone 6042; the graphics processor 6041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or other storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 601.
The audio device 600 also includes at least one sensor 605, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the luminance of the display panel 6061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 6061 and/or the backlight when the audio apparatus 600 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the audio device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 605 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not further described herein.
The display unit 606 is used to display information input by the user or information provided to the user. The Display unit 606 may include a Display panel 6061, and the Display panel 6061 may be configured by a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the audio device. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. Touch panel 6071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 6071 using a finger, stylus, or any suitable object or accessory). The touch panel 6071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 610, receives a command from the processor 610, and executes the command. In addition, the touch panel 6071 can be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 6071 can be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation on or near the touch panel 6071, the touch operation is transmitted to the processor 610 to determine the type of the touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although in fig. 6, the touch panel 6071 and the display panel 6061 are two independent components to implement the input and output functions of the audio device, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the audio device, and this is not limited here.
The interface unit 608 is an interface for connecting an external device to the audio apparatus 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the audio apparatus 600 or may be used to transmit data between the audio apparatus 600 and the external device.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the device, and the like. Further, the memory 609 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 610 is a control center of the audio device, connects various parts of the entire audio device using various interfaces and lines, and performs various functions of the audio device and processes data by running or executing software programs and/or modules stored in the memory 609 and calling up data stored in the memory 609, thereby performing overall monitoring of the audio device. Processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The audio device 600 may also include a power supply 611 (e.g., a battery) to supply power to the various components, and preferably, the power supply 611 may be logically coupled to the processor 610 via a power management system to manage charging, discharging, and power consumption management functions via the power management system.
Preferably, an embodiment of the present invention further provides an audio device, which includes a processor 610, a memory 609, and a computer program stored in the memory 609 and capable of running on the processor 610, where the computer program is executed by the processor 610 to implement each process of the above-mentioned audio output method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
EXAMPLE five
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned audio output method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The embodiment of the application provides a computer-readable storage medium which, by acquiring the position information of the target user, determines an audio output direction and audio output power according to that position information, and controls an audio output unit to output audio data according to the audio output direction and the audio output power. Therefore, when multiple users are present, the audio device can locate the target user and output audio data toward that user without disturbing others. Meanwhile, the audio device can also track the user's real-time position and adjust the output direction and output power according to the target user's position information, meeting the user's needs and improving the user experience.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present application shall be included in the scope of the claims of the present application.

Claims (7)

1. An audio output method, applied to an audio device, wherein the audio device comprises a camera and a plurality of audio output units, the method comprising:
receiving an input directional sounding instruction;
performing voiceprint recognition on the directional sounding instruction, and determining, according to a voiceprint recognition result, a user identifier of a target user who input the directional sounding instruction;
acquiring a first face feature corresponding to the user identifier;
acquiring, from an image captured by the camera, a face image matching the first face feature, and determining a user corresponding to the face image as the target user;
acquiring position information of the target user, wherein the position information comprises a direction of the target user relative to the audio device;
determining an audio output direction and an audio output power according to the position information of the target user; and
controlling the audio output units to output audio data according to the audio output direction and the audio output power;
wherein the determining an audio output direction and an audio output power according to the position information of the target user comprises:
in a case where there are a plurality of target users, determining a movement trajectory of each target user according to the direction of that target user relative to the audio device;
determining a movement trajectory of the audio output unit corresponding to each user identifier according to the movement trajectory of each target user; and
determining a corresponding audio output direction based on the movement trajectory of each audio output unit.
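As a non-limiting illustration of the multi-user steering recited above, the sketch below assumes a trajectory encoding (a time-ordered list of angles per user identifier) and a steering rule (point each audio output unit at the user's most recent position) that the claim itself does not specify:

```python
def output_direction_from_trajectory(angle_samples):
    """Steer an audio output unit toward the most recent position in a
    target user's movement trajectory. Angles are in degrees relative to
    the audio device; the list encoding is a hypothetical choice."""
    if not angle_samples:
        raise ValueError("empty trajectory")
    return angle_samples[-1]

# Two target users tracked via the camera; each audio output unit is
# bound to one user identifier and follows that user's trajectory.
trajectories = {
    "user_a": [10.0, 12.5, 15.0],     # drifting to the right
    "user_b": [-30.0, -30.0, -28.0],  # nearly stationary
}
directions = {uid: output_direction_from_trajectory(t)
              for uid, t in trajectories.items()}
print(directions)  # {'user_a': 15.0, 'user_b': -28.0}
```

A real implementation would smooth or predict the trajectory rather than snap to the last sample; the snap-to-last rule is kept only so the mapping from trajectory to output direction is visible.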
2. The method according to claim 1, wherein the position information of the target user further comprises a distance between the target user and the audio device; and
the determining an audio output direction and an audio output power according to the position information of the target user comprises:
determining the audio output power according to the distance between the target user and the audio device.
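The claim does not fix the distance-to-power mapping; one plausible sketch, assuming free-field inverse-square attenuation (so the sound level arriving at the user stays roughly constant), with illustrative reference values:

```python
def output_power_from_distance(distance_m, reference_power_w=0.1,
                               reference_distance_m=1.0):
    """Scale output power with the square of the user's distance so the
    sound level at the user stays roughly constant under free-field
    inverse-square attenuation. Reference values are assumptions."""
    return reference_power_w * (distance_m / reference_distance_m) ** 2

# A user twice as far away needs four times the reference power.
print(output_power_from_distance(2.0))
```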
3. The method according to claim 2, wherein the acquiring position information of the target user comprises:
determining a position of the target user's face from the image captured by the camera, and determining the direction of the target user relative to the audio device according to the position of the face; and
acquiring the distance between the target user and the audio device based on a light-pulse ranging mechanism.
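The two measurements in claim 3 can be sketched as follows. The pixel-to-angle mapping assumes a simple linear field-of-view model (a detail the claim does not state); the light-pulse ranging follows the standard time-of-flight relation, distance = c·t/2 for a round-trip time t:

```python
C_M_PER_S = 299_792_458.0  # speed of light

def direction_from_face_position(face_center_x, image_width,
                                 horizontal_fov_deg):
    """Map the horizontal pixel position of the detected face to an angle
    relative to the camera axis (simple linear field-of-view model)."""
    offset = face_center_x / image_width - 0.5  # -0.5 (left) .. 0.5 (right)
    return offset * horizontal_fov_deg

def distance_from_light_pulse(round_trip_seconds):
    """Time-of-flight ranging: the light pulse travels to the user and
    back, so the one-way distance is c * t / 2."""
    return C_M_PER_S * round_trip_seconds / 2.0

angle_deg = direction_from_face_position(960, 1280, 60.0)  # face right of centre
distance_m = distance_from_light_pulse(20e-9)              # 20 ns round trip
print(angle_deg, distance_m)  # 15.0 degrees, ~3.0 m
```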
4. The method according to claim 1, wherein the acquiring, from an image captured by the camera, a face image matching the first face feature comprises:
capturing an image with the camera;
extracting a second face feature from the image, and matching the second face feature against the first face feature; and
if the second face feature matches the first face feature, acquiring a face image corresponding to the second face feature.
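The matching step in claim 4 is unspecified beyond "matches"; a common sketch treats the face features as embedding vectors and compares them with cosine similarity against a threshold — both the metric and the 0.8 threshold here are illustrative assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def match_face(first_feature, second_features, threshold=0.8):
    """Return the index of the first candidate second face feature that
    matches the stored first face feature, or None if nothing matches."""
    for i, second in enumerate(second_features):
        if cosine_similarity(first_feature, second) >= threshold:
            return i
    return None

stored = [0.6, 0.8, 0.0]            # first face feature (enrolled user)
candidates = [[0.0, 0.0, 1.0],      # different person: similarity ~0
              [0.59, 0.81, 0.02]]   # same person, new image: similarity ~1
print(match_face(stored, candidates))  # 1
```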
5. An audio output device, wherein the device comprises a camera and a plurality of audio output units, and the device further comprises:
a receiving module, configured to receive an input directional sounding instruction;
a recognition module, configured to perform voiceprint recognition on the directional sounding instruction and determine, according to a voiceprint recognition result, a user identifier of a target user who input the directional sounding instruction;
a feature obtaining module, configured to obtain a first face feature corresponding to the user identifier;
a matching module, configured to acquire, from an image captured by the camera, a face image matching the first face feature and determine a user corresponding to the face image as the target user;
an acquisition module, configured to acquire position information of the target user, wherein the position information comprises a direction of the target user relative to the audio device;
a determining module, configured to determine an audio output direction and an audio output power according to the position information of the target user; and
an output module, configured to control the audio output units to output audio data according to the audio output direction and the audio output power;
wherein the determining module is specifically configured to: in a case where there are a plurality of target users, determine a movement trajectory of each target user according to the direction of that target user relative to the audio device; determine a movement trajectory of the audio output unit corresponding to each user identifier according to the movement trajectory of each target user; and determine a corresponding audio output direction based on the movement trajectory of each audio output unit.
6. An audio device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the audio output method according to any one of claims 1 to 4.
7. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the audio output method according to any one of claims 1 to 4.
CN201811102136.8A 2018-09-20 2018-09-20 Audio output method and device and audio equipment Active CN109284081B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811102136.8A CN109284081B (en) 2018-09-20 2018-09-20 Audio output method and device and audio equipment


Publications (2)

Publication Number Publication Date
CN109284081A CN109284081A (en) 2019-01-29
CN109284081B true CN109284081B (en) 2022-06-24

Family

ID=65181752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811102136.8A Active CN109284081B (en) 2018-09-20 2018-09-20 Audio output method and device and audio equipment

Country Status (1)

Country Link
CN (1) CN109284081B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7200702B2 (en) * 2019-01-30 2023-01-10 京セラドキュメントソリューションズ株式会社 image forming device
CN112565973B (en) * 2020-12-21 2023-08-01 Oppo广东移动通信有限公司 Terminal, terminal control method, device and storage medium
CN113055810A (en) * 2021-03-05 2021-06-29 广州小鹏汽车科技有限公司 Sound effect control method, device, system, vehicle and storage medium
CN113050076A (en) * 2021-03-25 2021-06-29 京东方科技集团股份有限公司 Method, device and system for sending directional audio information and electronic equipment
CN113676818B (en) * 2021-08-02 2023-11-10 维沃移动通信有限公司 Playback apparatus, control method and control device thereof, and computer-readable storage medium
CN114615542A (en) * 2022-03-25 2022-06-10 联想(北京)有限公司 Control method, control device and content sharing system

Citations (6)

Publication number Priority date Publication date Assignee Title
CN1852354A (en) * 2005-10-17 2006-10-25 华为技术有限公司 Method and device for collecting user behavior characteristics
CN105323670A (en) * 2014-07-11 2016-02-10 西安Tcl软件开发有限公司 Terminal and directional audio signal sending method
CN107170440A (en) * 2017-05-31 2017-09-15 宇龙计算机通信科技(深圳)有限公司 Orient transaudient method, device, mobile terminal and computer-readable recording medium
CN107623776A (en) * 2017-08-24 2018-01-23 维沃移动通信有限公司 A kind of method for controlling volume, system and mobile terminal
CN107656718A (en) * 2017-08-02 2018-02-02 宇龙计算机通信科技(深圳)有限公司 A kind of audio signal direction propagation method, apparatus, terminal and storage medium
CN107992816A (en) * 2017-11-28 2018-05-04 广东小天才科技有限公司 One kind is taken pictures searching method, device and electronic equipment

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
US10154358B2 (en) * 2015-11-18 2018-12-11 Samsung Electronics Co., Ltd. Audio apparatus adaptable to user position
CN105632540A (en) * 2015-12-18 2016-06-01 湖南人文科技学院 Somatosensory detection and sound control based intelligent music play system
CN105549947B (en) * 2015-12-21 2019-03-29 联想(北京)有限公司 A kind of control method and electronic equipment of audio frequency apparatus
CN106325808B (en) * 2016-08-26 2019-05-21 北京小米移动软件有限公司 Audio method of adjustment and device
CN106973160A (en) * 2017-03-27 2017-07-21 广东小天才科技有限公司 A kind of method for secret protection, device and equipment
CN108319440B (en) * 2017-12-21 2021-03-30 维沃移动通信有限公司 Audio output method and mobile terminal
CN108055403A (en) * 2017-12-21 2018-05-18 努比亚技术有限公司 A kind of audio method of adjustment, terminal and computer readable storage medium
CN108509856A (en) * 2018-03-06 2018-09-07 深圳市沃特沃德股份有限公司 Audio regulation method, device and stereo set
CN108491181B (en) * 2018-03-27 2021-04-13 联想(北京)有限公司 Audio output device and method



Similar Documents

Publication Publication Date Title
CN109284081B (en) Audio output method and device and audio equipment
CN110740259B (en) Video processing method and electronic equipment
CN111417028B (en) Information processing method, information processing device, storage medium and electronic equipment
CN111223143B (en) Key point detection method and device and computer readable storage medium
CN110557683B (en) Video playing control method and electronic equipment
CN108989672B (en) Shooting method and mobile terminal
CN108683850B (en) Shooting prompting method and mobile terminal
CN109005336B (en) Image shooting method and terminal equipment
CN109272473B (en) Image processing method and mobile terminal
CN111401463B (en) Method for outputting detection result, electronic equipment and medium
CN109618218B (en) Video processing method and mobile terminal
CN109544445B (en) Image processing method and device and mobile terminal
CN108763475B (en) Recording method, recording device and terminal equipment
CN109448069B (en) Template generation method and mobile terminal
CN107728877B (en) Application recommendation method and mobile terminal
CN110544287B (en) Picture allocation processing method and electronic equipment
CN111738100A (en) Mouth shape-based voice recognition method and terminal equipment
CN107809515B (en) Display control method and mobile terminal
CN111405361B (en) Video acquisition method, electronic equipment and computer readable storage medium
CN111401283A (en) Face recognition method and device, electronic equipment and storage medium
CN111276142B (en) Voice wake-up method and electronic equipment
CN111104927B (en) Information acquisition method of target person and electronic equipment
CN109542293B (en) Menu interface setting method and mobile terminal
CN109257543B (en) Shooting mode control method and mobile terminal
CN111416955A (en) Video call method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant