CN109284081A - Audio output method, apparatus, and audio device - Google Patents

Audio output method, apparatus, and audio device

Info

Publication number
CN109284081A
CN109284081A, CN201811102136.8A
Authority
CN
China
Prior art keywords
audio
target user
audio device
output power
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811102136.8A
Other languages
Chinese (zh)
Other versions
CN109284081B (en)
Inventor
陈海新 (Chen Haixin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201811102136.8A priority Critical patent/CN109284081B/en
Publication of CN109284081A publication Critical patent/CN109284081A/en
Application granted granted Critical
Publication of CN109284081B publication Critical patent/CN109284081B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application disclose an audio output method, an audio output apparatus, and an audio device. The method is applied to an audio device that includes an audio output unit and includes: obtaining location information of a target user; determining an audio output direction and an audio output power according to the location information of the target user; and controlling the audio output unit to output audio data according to the audio output direction and the audio output power. In this way, audio data can be output directionally, which improves the user experience.

Description

Audio output method, apparatus, and audio device
Technical field
The present application relates to the field of computer technology, and in particular to an audio output method, an audio output apparatus, and an audio device.
Background art
With the rapid development of computer technology, voice search accounts for a large share of search-engine usage, and most current audio devices support voice search, which brings convenience to users' daily life and work: the audio device performs semantic recognition on a voice instruction issued by a user and outputs the corresponding audio data to the user according to the recognition result.
In general, an audio device can only perform semantic analysis on the user's voice and output the requested audio data to the user. As people's needs grow, however, traditional audio devices can no longer satisfy them.
Several people may share the same environment. In that case, when one user uses the audio device, the audio data it outputs may disturb the others. For example, in a family where the parents want to listen to a song while the child is doing homework, turning on the audio device disturbs the child.
Summary of the invention
The purpose of the embodiments of the present application is to provide an audio output method, an audio output apparatus, and an audio device, so as to solve the problems in the prior art that an audio device, while outputting audio data to a target user, disturbs other people in the same environment, fails to meet user needs, and results in a poor user experience.
In order to solve the above technical problems, the embodiments of the present application are implemented as follows.
In a first aspect, an embodiment of the present application provides an audio output method, applied to an audio device that includes an audio output unit, the method comprising:
obtaining location information of a target user;
determining an audio output direction and an audio output power according to the location information of the target user; and
controlling the audio output unit to output audio data according to the audio output direction and the audio output power.
Optionally, the location information of the target user includes:
a direction of the target user relative to the audio device and a distance between the target user and the audio device;
the determining an audio output direction and an audio output power according to the location information of the target user comprises:
determining the audio output direction according to the direction of the target user relative to the audio device; and
determining the audio output power according to the distance between the target user and the audio device.
Optionally, the audio device further includes a camera, and before the obtaining location information of a target user, the method further comprises:
determining a user identifier of the target user;
obtaining a first face feature corresponding to the user identifier; and
obtaining, from an image captured by the camera, a face image that matches the first face feature, and determining the user corresponding to the face image as the target user.
Optionally, the determining a user identifier of the target user comprises:
receiving an input directional sounding instruction; and
performing voiceprint recognition on the directional sounding instruction, and determining, according to a result of the voiceprint recognition, the user identifier of the target user who input the directional sounding instruction.
Optionally, the obtaining location information of a target user comprises:
determining, from an image captured by the camera, the position of the target user's face, and determining the direction of the target user relative to the audio device according to the position of the target user's face; and
obtaining the distance between the target user and the audio device based on an optical pulse ranging mechanism.
Optionally, the obtaining, from an image captured by the camera, a face image that matches the first face feature comprises:
capturing an image with the camera;
extracting a second face feature from the image, and matching the second face feature against the first face feature; and
if the second face feature matches the first face feature, obtaining the face image corresponding to the second face feature.
Optionally, the determining the audio output direction according to the direction of the target user relative to the audio device comprises:
determining a motion track of the target user according to the direction of the target user relative to the audio device;
determining a motion track of the audio output unit within the audio device according to the motion track of the target user; and
determining the audio output direction based on the motion track within the audio device.
Optionally, there are multiple target users and multiple audio output units, each target user corresponding to one or more audio output units, and the determining a motion track of the audio output unit within the audio device according to the motion track of the target user comprises:
determining, according to the motion track of each target user, the motion track of the audio output unit corresponding to each user identifier.
In a second aspect, an embodiment of the present application provides an audio output apparatus that includes an audio output unit, the apparatus comprising:
an obtaining module, configured to obtain location information of a target user;
a determining module, configured to determine an audio output direction and an audio output power according to the location information of the target user; and
an output module, configured to control the audio output unit to output audio data according to the audio output direction and the audio output power.
Optionally, the location information of the target user includes:
a direction of the target user relative to the audio device and a distance between the target user and the audio device;
the determining module comprises:
a direction determining unit, configured to determine the audio output direction according to the direction of the target user relative to the audio device; and
a power determining unit, configured to determine the audio output power according to the distance between the target user and the audio device.
Optionally, the apparatus further includes a camera, and the apparatus further comprises:
an identifier determining module, configured to determine a user identifier of the target user;
a feature obtaining module, configured to obtain a first face feature corresponding to the user identifier; and
a matching module, configured to obtain, from an image captured by the camera, a face image that matches the first face feature, and to determine the user corresponding to the face image as the target user.
Optionally, the identifier determining module comprises:
a receiving unit, configured to receive an input directional sounding instruction; and
a recognition unit, configured to perform voiceprint recognition on the directional sounding instruction and to determine, according to a result of the voiceprint recognition, the user identifier of the target user who input the directional sounding instruction.
Optionally, the determining module comprises:
a direction determining unit, configured to determine, from an image captured by the camera, the position of the target user's face, and to determine the direction of the target user relative to the audio device according to the position of the target user's face; and
a distance determining unit, configured to obtain the distance between the target user and the audio device based on an optical pulse ranging mechanism.
Optionally, the matching module comprises:
an image acquisition unit, configured to capture an image with the camera;
an extraction unit, configured to extract a second face feature from the image and to match the second face feature against the first face feature; and
an image determining unit, configured to obtain, if the second face feature matches the first face feature, the face image corresponding to the second face feature.
Optionally, the direction determining unit is configured to:
determine a motion track of the target user according to the direction of the target user relative to the audio device;
determine a motion track of the audio output unit within the audio device according to the motion track of the target user; and
determine the audio output direction based on the motion track within the audio device.
Optionally, there are multiple target users and multiple audio output units, each target user corresponding to one or more audio output units, and the direction determining unit is configured to:
determine, according to the motion track of each target user, the motion track of the audio output unit corresponding to each user identifier.
In a third aspect, an embodiment of the present application provides an audio device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the audio output method provided by the above embodiments.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the audio output method provided by the above embodiments.
As can be seen from the technical solutions provided by the above embodiments of the present application, the embodiments obtain the location information of the target user, determine the audio output direction and the audio output power according to the location information of the target user, and control the audio output unit to output audio data according to the audio output direction and the audio output power. In this way, when several people are present, the audio device can locate the target user and output the audio data towards the target user without disturbing others; meanwhile, the audio device can track the user's real-time position and adjust the output direction and output power according to the target user's location information, meeting the user's needs and improving the user experience.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments recorded in the present application, and other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
Fig. 1 is a flowchart of an embodiment of an audio output method of the present application;
Fig. 2 is a flowchart of another embodiment of an audio output method of the present application;
Fig. 3 is a schematic diagram of a distance measurement method of an audio device of the present application;
Fig. 4 is a schematic diagram of motion tracks of audio output units of the present application;
Fig. 5 is a schematic structural diagram of an audio output apparatus of the present application;
Fig. 6 is a schematic structural diagram of an audio device of the present application.
Detailed description of the embodiments
The embodiments of the present application provide an audio output method, an audio output apparatus, and an audio device.
In order to enable those skilled in the art to better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
Embodiment one
As shown in Fig. 1, an embodiment of the present application provides an audio output method. The method may be executed by an audio device such as a speaker, a video device with an audio output function, or another device with an audio output function. The audio device includes an audio output unit that can move within the audio device, and the method enables directional output of audio. The method may specifically include the following steps:
In step S102, location information of a target user is obtained.
The target user may be a user to whom audio data needs to be output directionally. The location information may be the distance between the user and the audio device, the user's coordinate information, or the like, and may also include the direction of the user relative to the audio device.
In an implementation, with the rapid development of computer technology, voice search accounts for a large share of search-engine usage, and most current audio devices support voice search, which brings convenience to users' daily life and work: the audio device performs semantic recognition on a voice instruction issued by a user and outputs the corresponding audio data according to the recognition result. In general, an audio device can only perform semantic analysis on the user's voice and output the requested audio data. Some audio devices are also equipped with a camera, so the user can take photos or videos with the audio device, but as people's needs grow, traditional audio devices can no longer satisfy them. Several people may share the same environment; in that case, when one user needs to use the audio device, the audio data it outputs may disturb the others. For example, in a family where the parents want to listen to a song while the child is doing homework, turning on the audio device disturbs the child. While outputting audio data to the target user, the audio device disturbs other people and cannot meet the user's demand for directional sounding, resulting in a poor user experience. For this reason, the embodiments of the present application provide a technical solution to the above problem, which may specifically include the following:
The audio device may be equipped with a mechanism capable of obtaining the user's location information; for example, the location information may be determined by analyzing images, or by combining images with distance measurement. Specifically, the audio device may include a camera and a rangefinder component. When a user (i.e., the target user) needs to use the directional sounding function of the audio device, the audio device may turn on the camera. The target user may stand at a designated position a certain distance from the audio device; the camera may be preset to capture an image at that designated position, and the user in that image is taken as the target user. After the target user stands at the designated position, the camera captures an image of that position, and the user in the image is taken as the target user to whom audio data is to be output directionally. The target user may then walk around freely; during this process, the camera keeps capturing images of the target user, and the captured images are analyzed to determine the target user's location information.
In practical applications, besides determining the user's location information through the camera, components such as a laser rangefinder or an electronic rangefinder may also be used to obtain the user's location information. This may be set according to the actual situation, and is not limited in the embodiments of the present application.
In step S104, an audio output direction and an audio output power are determined according to the location information of the target user.
The audio output unit may be a movable directional sounding component in the audio device, for example an ultrasonic loudspeaker capable of directional sounding.
In an implementation, the location information may include the distance between the target user and the audio device and the direction of the target user relative to the audio device. The distance between the target user and the audio device may be, for example, 3 meters or 2 meters; the direction of the target user relative to the audio device may be, for example, due east, or a direction determined relative to a specified reference. After the target user's location information is obtained in step S102, the audio output direction of the audio output unit may be determined according to the direction of the target user relative to the audio device, and the audio output power of the audio output unit may be determined according to the distance between the user and the audio device. The audio device may set the output power according to a preset audio output power rule: the output power may be proportional to the distance, i.e., the larger the distance, the larger the output power, and the proportionality coefficient may be set to 1, 3, 5, or the like. For example, if the obtained user location information indicates that the user is 2 meters due east of the audio device, and the proportionality coefficient between the audio output power and the distance is set to 2, then the audio output direction of the audio device is due east and the audio output power is 4 (i.e., 2*2) watts, as sketched below.
In step S106, the audio output unit is controlled to output audio data according to the audio output direction and the audio output power.
The audio data may be audio data stored in the audio device, audio data stored on a terminal device or server connected to the audio device in a wired or wireless manner, or audio data obtained online after the audio device connects to the terminal device or server.
In an implementation, the target user may input the name of the audio data (for example, a song title or the title of a comic dialogue). Specifically, a voice collection component such as a microphone may be provided in the audio device and kept on in real time. When the target user speaks the name of the audio data, the audio device collects the voice through its voice collection component and thus obtains the name of the audio data input by the target user. After obtaining the name of the audio data, the audio device may obtain the corresponding audio data. Specifically, the audio device may send a request for the audio data to a designated smart device (such as a mobile phone, a tablet computer, or a server); the smart device obtains the audio data and sends it to the audio device, which thus obtains the audio data to be output. After obtaining the audio data, the audio device may, according to the audio output direction and the audio output power determined in step S104, control the audio output unit to output the audio data to the target user towards the audio output direction and with the audio output power.
The embodiment of the present application provides an audio output method: the location information of the target user is obtained, the audio output direction and the audio output power are determined according to the location information of the target user, and the audio output unit is controlled to output audio data according to the audio output direction and the audio output power. In this way, when several people are present, the audio device can locate the target user and output the audio data towards the target user without disturbing others; meanwhile, the audio device can track the user's real-time position and adjust the output direction and output power according to the target user's location information, meeting the user's needs and improving the user experience.
Embodiment two
As shown in Fig. 2, an embodiment of the present application provides an audio output method. The method may be executed by an audio device such as a speaker, a video device with an audio output function, or another device with an audio output function. The audio device includes an audio output unit that can move within the audio device, and the method enables directional output of audio. The method may specifically include the following steps:
In step S202, an input directional sounding instruction is received.
The directional sounding instruction may be an instruction that instructs the audio device to output audio in a directional sounding mode. It may be any voice input by the user that contains one or more keywords, such as "open", "start", or "directional".
In an implementation, a component such as a microphone may be provided in the audio device. When the user issues an instruction to the audio device, the microphone collects the instruction, and the audio device may perform semantic analysis on it. If the analysis determines that the instruction contains a keyword for starting directional sounding, the audio device determines that the instruction input by the user is a directional sounding instruction and may switch the current operating mode to the directional sounding mode. For example, the user may say "please open directional sound mode" towards the audio device; the microphone of the audio device collects the voice, the audio device performs semantic analysis on it and extracts keywords such as "open" and "directional", and the audio device then determines that the voice input by the user is a directional sounding instruction.
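A hedged sketch of the keyword check described above, applied after speech recognition; the keyword set and the recognized text below are illustrative, since the patent only gives "open", "start", and "directional" as example keywords.

```python
# Switch to directional sounding mode when a trigger keyword is recognized.
TRIGGER_KEYWORDS = {"open", "start", "directional"}

def is_directional_sounding_instruction(recognized_text: str) -> bool:
    words = recognized_text.lower().split()
    return any(keyword in words for keyword in TRIGGER_KEYWORDS)

print(is_directional_sounding_instruction("please open directional sound mode"))  # True
```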
In step S204, voiceprint recognition is performed on the directional sounding instruction, and the user identifier of the target user who input the directional sounding instruction is determined according to the result of the voiceprint recognition.
Voiceprint recognition may consist of performing signal processing on the directional sounding instruction, extracting the voiceprint feature of the directional sounding instruction, and identifying the user's identity according to the voiceprint feature. The user identifier may be the user's name, a code, or the like.
In an implementation, the audio device may store the voiceprint features of multiple different users, each with a corresponding user identifier. After receiving the directional sounding instruction, the audio device may perform signal processing on it and extract its voiceprint feature; it may then search, among the voiceprint features of the multiple different users stored in the audio device, for the voiceprint feature that matches the extracted voiceprint feature, and obtain the user identifier corresponding to the found voiceprint feature, which is the user identifier of the target user who input the directional sounding instruction.
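A minimal sketch of that lookup, assuming voiceprint features are fixed-length vectors compared by cosine similarity with a fixed threshold; the patent does not specify the feature type or matching metric, so these are illustrative assumptions.

```python
import math

stored_voiceprints = {           # user identifier -> stored voiceprint feature (toy values)
    "UserA": [0.9, 0.1, 0.3],
    "UserB": [0.2, 0.8, 0.5],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify_speaker(extracted_feature, threshold=0.9):
    """Return the user identifier whose stored voiceprint best matches, or None."""
    best_id, best_score = None, threshold
    for user_id, feature in stored_voiceprints.items():
        score = cosine_similarity(extracted_feature, feature)
        if score >= best_score:
            best_id, best_score = user_id, score
    return best_id

print(identify_speaker([0.88, 0.12, 0.31]))      # "UserA" under these toy values
```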
In addition, if no voiceprint feature matching the extracted voiceprint feature is stored in the audio device, the extracted voiceprint feature may be saved and assigned or set a corresponding user identifier.
In step S206, the first face feature corresponding to the user identifier is obtained.
The first face feature may be divided into geometric features and representation features. For example, the geometric features of a face may be the positional relations between facial features such as the eyes, nose, and mouth, for example their distances, areas, and angles.
In an implementation, the audio device may store the face features of multiple users, each with a corresponding user identifier, where the user identifier, the face feature, and the voiceprint feature correspond to one another, as shown in Table 1.
Table 1
User identifier    Face feature      Voiceprint feature
User A             Face feature 1    Voiceprint feature 1
User B             Face feature 2    Voiceprint feature 2
After the user identifier of the target user is determined through step S204, the face feature corresponding to that user identifier can be found among the face features of the multiple users stored in the audio device. For example, if the user identifier corresponding to user A's voiceprint feature is UserA, and the face feature stored in the audio device for the user identifier UserA is a data set W characterizing face information, then the face feature in the data set W is obtained as the face feature of the target user.
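The correspondence of Table 1 can be illustrated with a small lookup sketch; the stored values are placeholders, and holding the profiles in an in-memory dictionary is an assumption made only for illustration.

```python
# Each user identifier maps to one face feature and one voiceprint feature (Table 1).
user_profiles = {
    "UserA": {"face_feature": "face feature data set W", "voiceprint": "voiceprint feature 1"},
    "UserB": {"face_feature": "face feature 2",          "voiceprint": "voiceprint feature 2"},
}

def get_first_face_feature(user_id):
    profile = user_profiles.get(user_id)
    return profile["face_feature"] if profile else None

print(get_first_face_feature("UserA"))   # "face feature data set W"
```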
After the face feature of the target user is obtained through the above process, the object to which sound is to be directed (i.e., the target user) can be confirmed. In practical applications, a face image matching the first face feature may be obtained from an image captured by the camera, and the user corresponding to that face image is determined as the target user. The specific process may refer to the following steps S208 to S214.
In step S208, an image is captured with the camera.
The captured image may be an image containing the faces of one or more different users.
In an implementation, the audio device may be equipped with one or more cameras, and a camera can move freely as needed. If face information is detected in the picture while the camera is moving, an image of the picture may be captured. Face detection here refers to detecting and extracting face images from the input image, usually by using Haar features and a cascade classifier trained by the Adaboost algorithm to classify each block in the image; if a rectangular region passes the cascade classifier, it is identified as a face image. Meanwhile, when capturing images, the camera can keep moving so that all images containing user faces within range are captured, or images containing user faces may be captured by multiple cameras. A captured image may contain the faces of one or more users.
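A hedged OpenCV sketch of the Haar-cascade detection mentioned above; the camera index, cascade file, and detection parameters are assumptions, as the patent only states that Haar features and an Adaboost-trained cascade classifier are used.

```python
import cv2

# Pretrained Haar cascade shipped with OpenCV (an assumed, illustrative model file).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """Return (x, y, w, h) rectangles that passed the cascade classifier."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

capture = cv2.VideoCapture(0)            # camera index 0 is an assumption
ok, frame = capture.read()
if ok:
    faces = detect_faces(frame)
    print(f"{len(faces)} face(s) detected")
capture.release()
```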
In step S210, a second face feature is extracted from the image, and the second face feature is matched against the first face feature.
In an implementation, face recognition may be performed on the captured image, and feature extraction may then be performed on the recognized faces, which are compared with the determined face feature of the target user (i.e., the first face feature). When the captured image contains only one face, the extracted face feature can be compared directly with the determined first face feature of the target user. If the captured image contains multiple faces, feature extraction may be performed on the face images one by one, and the extracted face features are each matched against the first face feature. For example, if the captured image contains user A, user B, and user C, face features are extracted for the three users and characterized by the data sets W, Y, and Z respectively, and each is then matched against the data set X characterizing the first face feature of the target user.
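A minimal sketch of this comparison step, assuming the feature sets W, Y, Z and X are numeric vectors compared by Euclidean distance against an illustrative threshold; the patent does not specify the comparison metric.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_target_face(candidate_features, first_face_feature, threshold=0.5):
    """Return the index of the candidate matching the first face feature, or None."""
    for index, feature in enumerate(candidate_features):
        if euclidean(feature, first_face_feature) <= threshold:
            return index
    return None

W, Y, Z = [0.2, 0.4], [0.9, 0.1], [0.5, 0.5]   # features of users A, B and C (toy values)
X = [0.21, 0.42]                                # the target user's first face feature
print(find_target_face([W, Y, Z], X))           # 0, i.e. user A matches
```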
In step S212, if the second face feature matches the first face feature, the face image corresponding to the second face feature is obtained.
In an implementation, if a face feature in the captured image matches the first face feature of the target user, the corresponding face image is obtained. For example, the camera has captured multiple images, each containing the faces of different users; if the characterizing set W of user A's face feature in one of the images matches the characterizing set X of the first face feature, the target user has been found in that image, and the face image of user A can be obtained.
In step S214, the user corresponding to the above face image is determined as the target user.
In step S216, the position of the target user's face is determined from the image captured by the camera, and the direction in which the target user is located is determined according to the position of the target user's face.
In an implementation, the motion track of the target user can be captured through the tracking shots of the camera. When the face image is matched in a tracking-shot image, the position of the target user has been determined. Once the target user's position is determined, the direction in which the target user is located can be obtained by the camera or by another device capable of ranging.
In step S218, the distance between the target user and the audio device is obtained based on an optical pulse ranging mechanism.
The optical pulse ranging mechanism may be, for example, the time-of-flight (TOF) ranging method or structured light.
In an implementation, taking the time-of-flight (TOF) method as an example for obtaining the distance between the user and the audio output unit, as shown in Fig. 3, the audio device may continuously emit light pulses (generally invisible light) towards the target user through a transmitter, and a detector then receives the light pulses reflected from the target user. The distance between the target user and the audio device is calculated by detecting the flight (round-trip) time of the light pulses. The specific method of obtaining the location information may differ depending on the application scenario, and is not limited in the embodiments of the present application.
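The time-of-flight relation can be written as a one-line calculation: the distance is the speed of light times half the measured round-trip time of the pulse. The 20-nanosecond example below is illustrative only.

```python
SPEED_OF_LIGHT = 299_792_458.0          # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance = speed of light * (round-trip time / 2)."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A round trip of 20 nanoseconds corresponds to roughly 3 metres.
print(round(tof_distance(20e-9), 2))    # ~3.0
```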
In step S220, the audio output power is determined according to the distance between the target user and the audio device.
In an implementation, the distance between the target user and the audio device (or the audio output unit) may determine the magnitude of the audio output power of the audio output unit. For example, when the distance between the target user and the audio device (or the audio output unit) is short, the output power of the audio output unit can be small, which both saves resources and satisfies the user's needs. When the distance between the target user and the audio device (or the audio output unit) is large, the output power of the audio output unit can be increased, so that the loudness heard by the target user does not change as the distance changes. The distance between the target user and the audio device and the audio output power of the audio output unit may be positively correlated or proportional, and the specific proportionality coefficient can be adjusted according to a preset rule. The audio output power of the audio output unit can also be determined for different distances by a preset output power scheme; the specific determination scheme differs depending on the actual application scenario and is not limited in the present application.
In addition, the distance between the target user and the audio device changes as the user's location changes. When the target user's location changes, the distance between the target user and the audio device can be obtained again, and the audio output power of the audio output unit is adjusted according to this distance. For example, if the obtained location information of the target user is 3 meters due east of the audio device, and the proportionality coefficient between distance and audio output power is 2, then the audio output power of the audio output unit can be determined to be 6 watts. After this, if the target user moves towards due west of the audio device and has advanced one meter within one second (the distance between the user and the audio device may be obtained once per second), the distance between the user and the audio device becomes 2 meters, so the audio output power can be adjusted to 4 watts, and so on, to determine the audio output power of the audio output unit.
In step S222, the motion track of the target user is determined according to the direction of the target user relative to the audio device.
In an implementation, when the direction of the target user relative to the audio device changes, the relative direction between the target user and the audio device at each time point is obtained, and these samples form the motion track of the target user. For example, when the target user is 2 meters due east of the audio device and moves towards due west relative to the audio device, the user's motion track is a movement towards due west at each time point.
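A hedged sketch of how the motion track might be accumulated, assuming the relative direction is sampled as a bearing in degrees at each time point; the sampling interval and the direction representation are not specified in the patent and are assumptions for illustration.

```python
from time import time

motion_track = []                        # list of (timestamp, relative_direction_deg) samples

def record_sample(direction_deg: float):
    """Append one direction sample; the sequence of samples forms the motion track."""
    motion_track.append((time(), direction_deg))

for direction in (90.0, 92.0, 95.0):     # the user's bearing drifting over successive time points
    record_sample(direction)
print(motion_track)
```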
In step S224, the motion track of the audio output unit within the audio device is determined according to the motion track of the target user.
In an implementation, after the audio device obtains the motion track of the target user, the motion track of the corresponding audio output unit within the audio device can be determined from the motion track of the target user. For example, when the user moves from due east of the audio device towards due west, the audio output unit also moves from due east towards due west within the audio device, simultaneously with and in the same direction as the user's motion track.
The processing of step S224 can be performed in various ways. Besides the way described above, when there are multiple target users, it can also be handled differently. Specifically, there may be multiple target users and multiple audio output units, each target user corresponding to one or more audio output units. In this case, the processing of step S224 can be implemented as follows: according to the motion track of each target user, the motion track of the audio output unit corresponding to each user identifier is determined.
In an implementation, the motion track of the corresponding audio output unit is determined according to the motion track of each target user. As shown in Fig. 4, the motion tracks of the audio output units 1, 2, and 3 corresponding to user A, user B, and user C do not interfere with each other; the motion track of each audio output unit is determined only by the motion track of its corresponding user, as sketched below.
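A minimal sketch of that one-to-one mapping; the bearing values, unit names, and the dictionary representation are placeholders chosen only to mirror Fig. 4.

```python
# Each target user's motion track drives only its assigned audio output unit.
user_tracks = {
    "UserA": [90.0, 85.0, 80.0],
    "UserB": [180.0, 180.0, 175.0],
    "UserC": [270.0, 275.0, 280.0],
}
unit_assignment = {"UserA": "unit 1", "UserB": "unit 2", "UserC": "unit 3"}

def unit_tracks(user_tracks, unit_assignment):
    """Each output unit simply mirrors the motion track of its assigned user."""
    return {unit_assignment[user]: track for user, track in user_tracks.items()}

print(unit_tracks(user_tracks, unit_assignment))
```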
In step S226, the audio output direction is determined based on the motion track within the audio device.
In an implementation, the audio output unit can move along the track within the audio device, thereby keeping its moving direction, and hence the audio output direction, synchronized with the target user.
In step S228, the audio output unit is controlled, according to the audio output direction and the audio output power, to output the audio data corresponding to the directional sounding instruction.
The specific processing of step S228 may refer to the related content of step S106 in the above embodiment one, and is not repeated here.
The embodiment of the present application provides an audio output method: the location information of the target user is obtained, the audio output direction and the audio output power are determined according to the location information of the target user, and the audio output unit is controlled to output audio data according to the audio output direction and the audio output power. In this way, when several people are present, the audio device can locate the target user and output the audio data towards the target user without disturbing others; meanwhile, the audio device can track the user's real-time position and adjust the output direction and output power according to the target user's location information, meeting the user's needs and improving the user experience.
Embodiment three
The above is the audio output method provided by the embodiments of the present application. Based on the same idea, an embodiment of the present application further provides an audio output apparatus. The apparatus includes an audio output unit, which can move within the audio device, as shown in Fig. 5.
The audio output apparatus includes an obtaining module 501, a determining module 502, and an output module 503, in which:
the obtaining module 501 is configured to obtain location information of a target user;
the determining module 502 is configured to determine an audio output direction and an audio output power according to the location information of the target user; and
the output module 503 is configured to control the audio output unit to output audio data according to the audio output direction and the audio output power.
In the embodiments of the present application, the location information of the target user includes: a direction of the target user relative to the audio device and a distance between the target user and the audio device;
the determining module 502 comprises:
a direction determining unit, configured to determine the audio output direction according to the direction of the target user relative to the audio device; and
a power determining unit, configured to determine the audio output power according to the distance between the target user and the audio device.
In the embodiments of the present application, the apparatus further includes a camera, and the apparatus further comprises:
an identifier determining module, configured to determine a user identifier of the target user;
a feature obtaining module, configured to obtain a first face feature corresponding to the user identifier; and
a matching module, configured to obtain, from an image captured by the camera, a face image that matches the first face feature, and to determine the user corresponding to the face image as the target user.
In the embodiments of the present application, the identifier determining module comprises:
a receiving unit, configured to receive an input directional sounding instruction; and
a recognition unit, configured to perform voiceprint recognition on the directional sounding instruction and to determine, according to a result of the voiceprint recognition, the user identifier of the target user who input the directional sounding instruction.
In the embodiments of the present application, the determining module 502 comprises:
a direction determining unit, configured to determine, from an image captured by the camera, the position of the target user's face, and to determine the direction of the target user relative to the audio device according to the position of the target user's face; and
a distance determining unit, configured to obtain the distance between the target user and the audio device based on an optical pulse ranging mechanism.
In the embodiments of the present application, the matching module comprises:
an image acquisition unit, configured to capture an image with the camera;
an extraction unit, configured to extract a second face feature from the image and to match the second face feature against the first face feature; and
an image determining unit, configured to obtain, if the second face feature matches the first face feature, the face image corresponding to the second face feature.
In the embodiments of the present application, the direction determining unit is configured to:
determine a motion track of the target user according to the direction of the target user relative to the audio device;
determine a motion track of the audio output unit within the audio device according to the motion track of the target user; and
determine the audio output direction based on the motion track within the audio device.
In the embodiments of the present application, there are multiple target users and multiple audio output units, each target user corresponding to one or more audio output units, and the direction determining unit is configured to:
determine, according to the motion track of each target user, the motion track of the audio output unit corresponding to each user identifier.
The embodiment of the present application provides an audio output apparatus: the location information of the target user is obtained, the audio output direction and the audio output power are determined according to the location information of the target user, and the audio output unit is controlled to output audio data according to the audio output direction and the audio output power. In this way, when several people are present, the audio device can locate the target user and output the audio data towards the target user without disturbing others; meanwhile, the audio device can track the user's real-time position and adjust the output direction and output power according to the target user's location information, meeting the user's needs and improving the user experience.
Embodiment four
Fig. 6 is a schematic diagram of the hardware structure of an audio device implementing the embodiments of the present application.
The audio device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, a power supply 611, and other components. Those skilled in the art will understand that the audio device structure shown in Fig. 6 does not constitute a limitation on the audio device; the audio device may include more or fewer components than illustrated, combine certain components, or arrange the components differently. The audio device further includes an audio output unit that can move within the audio device. In the embodiments of the present application, the audio device includes, but is not limited to, a speaker and the like.
The processor 610 is configured to obtain the location information of the target user.
The processor 610 is further configured to determine an audio output direction and an audio output power according to the location information of the target user.
The processor 610 is further configured to control the audio output unit to output audio data according to the audio output direction and the audio output power.
In addition, the location information of the target user includes a direction of the target user relative to the audio device and a distance between the target user and the audio device;
the processor 610 is further configured to determine the audio output direction according to the direction of the target user relative to the audio device;
the processor 610 is further configured to determine the audio output power according to the distance between the target user and the audio device.
In addition, the processor 610 is further configured to determine the user identifier of the target user.
In addition, the processor 610 is further configured to obtain the first face feature corresponding to the user identifier.
In addition, the processor 610 is further configured to obtain, from the image captured by the camera, the face image that matches the first face feature, and to determine the user corresponding to the face image as the target user.
In addition, the processor 610 is further configured to receive an input directional sounding instruction.
In addition, the processor 610 is further configured to perform voiceprint recognition on the directional sounding instruction and to determine, according to the result of the voiceprint recognition, the user identifier of the target user who input the directional sounding instruction.
In addition, the processor 610 is further configured to determine, from the image captured by the camera, the position of the target user's face, and to determine the direction of the target user relative to the audio device according to the position of the target user's face.
In addition, the processor 610 is further configured to obtain the distance between the target user and the audio device based on an optical pulse ranging mechanism.
In addition, the processor 610 is further configured to capture an image with the camera.
In addition, the processor 610 is further configured to extract a second face feature from the image and to match the second face feature against the first face feature.
In addition, the processor 610 is further configured to determine the audio output unit corresponding to each user identifier.
In addition, the processor 610 is further configured to obtain, if the second face feature matches the first face feature, the face image corresponding to the second face feature.
In addition, the processor 610 is further configured to determine the motion track of the target user according to the direction of the target user relative to the audio device.
In addition, the processor 610 is further configured to determine the audio output direction based on the motion track within the audio device.
In addition, the processor 610 is further configured to determine, according to the motion track of each target user, the motion track of the audio output unit corresponding to each user identifier.
The embodiment of the present application provides an audio device: the location information of the target user is obtained, the audio output direction and the audio output power are determined according to the location information of the target user, and the audio output unit is controlled to output audio data according to the audio output direction and the audio output power. In this way, when several people are present, the audio device can locate the target user and output the audio data towards the target user without disturbing others; meanwhile, the audio device can track the user's real-time position and adjust the output direction and output power according to the target user's location information, meeting the user's needs and improving the user experience.
It should be understood that the embodiment of the present application in, radio frequency unit 601 can be used for receiving and sending messages or communication process in, signal Send and receive, specifically, by from base station downlink data receive after, to processor 610 handle;In addition, by uplink Data are sent to base station.In general, radio frequency unit 601 includes but is not limited to antenna, at least one amplifier, transceiver, coupling Device, low-noise amplifier, duplexer etc..In addition, radio frequency unit 601 can also by wireless communication system and network and other set Standby communication.
Audio frequency apparatus provides wireless broadband internet by network module 602 for user and accesses, and such as user is helped to receive It sends e-mails, browse webpage and access streaming video etc..
Audio output unit 603 can be received by radio frequency unit 601 or network module 602 or in memory 609 The audio data of storage is converted into audio signal and exports to be sound.Moreover, audio output unit 603 can also provide and sound The relevant audio output of specific function that frequency equipment 600 executes is (for example, call signal receives sound, message sink sound etc. Deng).Audio output unit 603 includes loudspeaker, buzzer and receiver etc..
Input unit 604 is for receiving audio or video signal.Input unit 604 may include graphics processor (Graphics Processing Unit, GPU) 6041 and microphone 6042, graphics processor 6041 is in video acquisition mode Or the image data of the static images or video obtained in image capture mode by image capture apparatus (such as camera) carries out Reason.Treated, and picture frame may be displayed on display unit 606.Through graphics processor 6041, treated that picture frame can be deposited Storage is sent in memory 609 (or other storage mediums) or via radio frequency unit 601 or network module 602.Mike Wind 6042 can receive sound, and can be audio data by such acoustic processing.Treated audio data can be The format output that mobile communication base station can be sent to via radio frequency unit 601 is converted in the case where telephone calling model.
Audio frequency apparatus 600 further includes at least one sensor 605, such as optical sensor, motion sensor and other biographies Sensor.Specifically, optical sensor includes ambient light sensor and proximity sensor, wherein ambient light sensor can be according to environment The light and shade of light adjusts the brightness of display panel 6061, and proximity sensor can close when audio frequency apparatus 600 is moved in one's ear Display panel 6061 and/or backlight.As a kind of motion sensor, accelerometer sensor can detect in all directions (general For three axis) size of acceleration, it can detect that size and the direction of gravity when static, can be used to identify audio frequency apparatus posture (ratio Such as horizontal/vertical screen switching, dependent game, magnetometer pose calibrating), Vibration identification correlation function (such as pedometer, tap);It passes Sensor 605 can also include fingerprint sensor, pressure sensor, iris sensor, molecule sensor, gyroscope, barometer, wet Meter, thermometer, infrared sensor etc. are spent, details are not described herein.
Display unit 606 is for showing information input by user or being supplied to the information of user.Display unit 606 can wrap Display panel 6061 is included, liquid crystal display (Liquid Crystal Display, LCD), Organic Light Emitting Diode can be used Forms such as (Organic Light-Emitting Diode, OLED) configure display panel 6061.
User input unit 607 can be used for receiving the number or character information of input, and generate the use with audio frequency apparatus Family setting and the related key signals input of function control.Specifically, user input unit 607 include touch panel 6071 and Other input equipments 6072.Touch panel 6071, also referred to as touch screen collect the touch operation of user on it or nearby (for example user uses any suitable objects or attachment such as finger, stylus on touch panel 6071 or in touch panel 6071 Neighbouring operation).Touch panel 6071 may include both touch detecting apparatus and touch controller.Wherein, touch detection Device detects the touch orientation of user, and detects touch operation bring signal, transmits a signal to touch controller;Touch control Device processed receives touch information from touch detecting apparatus, and is converted into contact coordinate, then gives processor 610, receiving area It manages the order that device 610 is sent and is executed.Furthermore, it is possible to more using resistance-type, condenser type, infrared ray and surface acoustic wave etc. Seed type realizes touch panel 6071.In addition to touch panel 6071, user input unit 607 can also include other input equipments 6072.Specifically, other input equipments 6072 can include but is not limited to physical keyboard, function key (such as volume control button, Switch key etc.), trace ball, mouse, operating stick, details are not described herein.
Further, the touch panel 6071 may be overlaid on the display panel 6061. When the touch panel 6071 detects a touch operation on or near it, it passes the operation to the processor 610 to determine the type of the touch event, and the processor 610 then provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although in Fig. 6 the touch panel 6071 and the display panel 6061 are shown as two independent components implementing the input and output functions of the audio device, in some embodiments the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the audio device; this is not specifically limited here.
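The touch pipeline described above (touch detection, coordinate conversion by the controller, event classification by the processor) can be illustrated with the hedged sketch below. The class names, the tap/drag distinction, and the 10-pixel threshold are assumptions introduced only for this example.

from dataclasses import dataclass

@dataclass
class TouchSample:
    x: int          # contact coordinate reported by the touch controller
    y: int
    pressed: bool   # True while the finger or stylus stays on the panel

class TouchEventClassifier:
    """Turns a stream of contact coordinates into high-level touch events,
    playing the role of processor 610 in the description above."""

    def __init__(self, drag_threshold_px: int = 10):
        self._down: TouchSample | None = None
        self._drag_threshold = drag_threshold_px

    def feed(self, sample: TouchSample) -> str | None:
        if sample.pressed and self._down is None:
            self._down = sample          # finger down: remember the start point
            return None
        if not sample.pressed and self._down is not None:
            moved = abs(sample.x - self._down.x) + abs(sample.y - self._down.y)
            self._down = None
            # Classify the completed gesture so a visual response can be drawn.
            return "drag" if moved > self._drag_threshold else "tap"
        return None

# Example: press at (100, 100), release at (102, 101) -> classified as a tap.
clf = TouchEventClassifier()
clf.feed(TouchSample(100, 100, True))
print(clf.feed(TouchSample(102, 101, False)))   # -> "tap"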
The interface unit 608 is an interface through which an external device connects to the audio device 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 can be used to receive input (for example, data information or power) from the external device and transfer the received input to one or more elements within the audio device 600, or can be used to transmit data between the audio device 600 and the external device.
The memory 609 can be used to store software programs and various data. The memory 609 may mainly include a program storage area and a data storage area. The program storage area can store an operating system and application programs required by at least one function (such as a sound playback function or an image playback function); the data storage area can store data created according to the use of the device (such as audio data or a phone book). In addition, the memory 609 may include a high-speed random access memory, and may further include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 610 is the control center of the audio device. It connects all parts of the entire audio device through various interfaces and lines, and performs the various functions of the audio device and processes data by running or executing the software programs and/or modules stored in the memory 609 and invoking the data stored in the memory 609, thereby monitoring the audio device as a whole. The processor 610 may include one or more processing units. Preferably, the processor 610 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 610.
The audio device 600 may further include a power supply 611 (such as a battery) that supplies power to the various components. Preferably, the power supply 611 can be logically connected to the processor 610 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
Preferably, an embodiment of the present invention further provides an audio device, including a processor 610, a memory 609, and a computer program stored in the memory 609 and executable on the processor 610. When the computer program is executed by the processor 610, the processes of the above audio output method embodiments are implemented and the same technical effects can be achieved. To avoid repetition, details are not described here again.
Embodiment five
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the processes of the above audio output method embodiments are implemented and the same technical effects can be achieved. To avoid repetition, details are not described here again. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application provides a computer-readable storage medium. The position information of the target user is obtained; the audio output direction and the audio output power are determined according to the position information of the target user; and the audio output unit is controlled to output audio data according to the audio output direction and the audio output power. In this way, when multiple people are present, the audio device can locate the target user and output the audio data toward the target user without disturbing others. At the same time, the audio device can track the real-time position of the user and adjust the output direction and output power according to the position information of the target user, meeting the user's usage needs and improving the user experience.
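The following Python sketch illustrates, under stated assumptions, the overall flow summarized above: locate the target user's face in a camera frame, estimate the bearing from the horizontal offset of the face, map a measured distance to an output power, and steer the output accordingly. The camera field of view, the power mapping, the helper names such as measure_distance_tof and steer_output, and the use of OpenCV's Haar face detector are all assumptions introduced for this example; they are not taken from the patent.

import cv2
import numpy as np

HFOV_DEG = 60.0          # assumed horizontal field of view of the camera
MIN_POWER, MAX_POWER = 0.2, 1.0
MAX_RANGE_M = 5.0        # assumed distance at which output power saturates

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def locate_target_face(frame: np.ndarray):
    """Return the (x, y, w, h) box of the largest detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda f: f[2] * f[3])

def bearing_from_face(face, frame_width: int) -> float:
    """Map the face's horizontal offset to a bearing in degrees (0 = straight ahead)."""
    x, _, w, _ = face
    face_center = x + w / 2.0
    offset = (face_center - frame_width / 2.0) / (frame_width / 2.0)  # -1 .. 1
    return offset * (HFOV_DEG / 2.0)

def power_from_distance(distance_m: float) -> float:
    """Scale output power with distance, clamped to [MIN_POWER, MAX_POWER]."""
    frac = min(max(distance_m / MAX_RANGE_M, 0.0), 1.0)
    return MIN_POWER + frac * (MAX_POWER - MIN_POWER)

def measure_distance_tof() -> float:
    """Placeholder for a time-of-flight (light pulse) range measurement."""
    return 2.0  # metres; a real device would query its ranging hardware here

def steer_output(bearing_deg: float, power: float) -> None:
    """Placeholder for driving the directional audio output unit."""
    print(f"steering audio to {bearing_deg:+.1f} deg at power {power:.2f}")

# One iteration of the control loop on a captured frame.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    face = locate_target_face(frame)
    if face is not None:
        bearing = bearing_from_face(face, frame.shape[1])
        power = power_from_distance(measure_distance_tof())
        steer_output(bearing, power)

In a fuller implementation the detected face would additionally be matched against the stored facial features of the identified target user before steering, and the loop would run continuously so that the output direction follows the user's movement trajectory, as recited in the claims below.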
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM, and optical storage) that contain computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, where the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thereby provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface, and memory.
The memory may include a non-persistent memory, a random access memory (RAM), and/or a non-volatile memory in a computer-readable medium, such as a read-only memory (ROM) or a flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be implemented by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM, and optical storage) that contain computer-usable program code.
The above descriptions are merely examples of the present application and are not intended to limit the present application. For those skilled in the art, various modifications and changes may be made to the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (10)

1. An audio output method, applied to an audio device that includes an audio output unit, the method comprising:
obtaining position information of a target user;
determining an audio output direction and an audio output power according to the position information of the target user; and
controlling the audio output unit to output audio data according to the audio output direction and the audio output power.
2. The method according to claim 1, wherein the position information of the target user includes a direction of the target user relative to the audio device and a distance between the target user and the audio device;
the determining an audio output direction and an audio output power according to the position information of the target user comprises:
determining the audio output direction according to the direction of the target user relative to the audio device; and
determining the audio output power according to the distance between the target user and the audio device.
3. The method according to claim 1, wherein the audio device further includes a camera, and before the obtaining position information of a target user, the method further comprises:
determining a user identifier of the target user;
obtaining a first facial feature corresponding to the user identifier; and
obtaining, from an image captured by the camera, a facial image that matches the first facial feature, and determining the user corresponding to the facial image as the target user.
4. The method according to claim 3, wherein the determining a user identifier of the target user comprises:
receiving an input directional sound instruction; and
performing voiceprint recognition on the directional sound instruction, and determining, according to a result of the voiceprint recognition, the user identifier of the target user who input the directional sound instruction.
5. The method according to claim 2, wherein the obtaining position information of a target user comprises:
determining a position of the face of the target user from an image captured by the camera, and determining the direction of the target user relative to the audio device according to the position of the face of the target user; and
obtaining the distance between the target user and the audio device based on a light pulse ranging mechanism.
6. The method according to claim 3, wherein the obtaining, from an image captured by the camera, a facial image that matches the first facial feature comprises:
capturing an image by the camera;
extracting a second facial feature from the image, and matching the second facial feature against the first facial feature; and
if the second facial feature matches the first facial feature, obtaining the facial image corresponding to the second facial feature.
7. The method according to claim 2, wherein the determining the audio output direction according to the direction of the target user relative to the audio device comprises:
determining a movement trajectory of the target user according to the direction of the target user relative to the audio device;
determining a movement trajectory of the audio output unit within the audio device according to the movement trajectory of the target user; and
determining the audio output direction based on the movement trajectory within the audio device.
8. An audio output device, wherein the device includes an audio output unit, and the device comprises:
an obtaining module, configured to obtain position information of a target user;
a determining module, configured to determine an audio output direction and an audio output power according to the position information of the target user; and
an output module, configured to control the audio output unit to output audio data according to the audio output direction and the audio output power.
9. An audio device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein when the computer program is executed by the processor, the steps of the audio output method according to any one of claims 1 to 7 are implemented.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the audio output method according to any one of claims 1 to 7 are implemented.
CN201811102136.8A 2018-09-20 2018-09-20 Audio output method and device and audio equipment Active CN109284081B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811102136.8A CN109284081B (en) 2018-09-20 2018-09-20 Audio output method and device and audio equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811102136.8A CN109284081B (en) 2018-09-20 2018-09-20 Audio output method and device and audio equipment

Publications (2)

Publication Number Publication Date
CN109284081A true CN109284081A (en) 2019-01-29
CN109284081B CN109284081B (en) 2022-06-24

Family

ID=65181752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811102136.8A Active CN109284081B (en) 2018-09-20 2018-09-20 Audio output method and device and audio equipment

Country Status (1)

Country Link
CN (1) CN109284081B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1852354A (en) * 2005-10-17 2006-10-25 华为技术有限公司 Method and device for collecting user behavior characteristics
CN105323670A (en) * 2014-07-11 2016-02-10 西安Tcl软件开发有限公司 Terminal and directional audio signal sending method
US20170142533A1 (en) * 2015-11-18 2017-05-18 Samsung Electronics Co., Ltd. Audio apparatus adaptable to user position
CN105632540A (en) * 2015-12-18 2016-06-01 湖南人文科技学院 Somatosensory detection and sound control based intelligent music play system
CN105549947A (en) * 2015-12-21 2016-05-04 联想(北京)有限公司 Audio device control method and electronic device
CN106325808A (en) * 2016-08-26 2017-01-11 北京小米移动软件有限公司 Sound effect adjustment method and device
CN106973160A (en) * 2017-03-27 2017-07-21 广东小天才科技有限公司 A kind of method for secret protection, device and equipment
CN107170440A (en) * 2017-05-31 2017-09-15 宇龙计算机通信科技(深圳)有限公司 Orient transaudient method, device, mobile terminal and computer-readable recording medium
CN107656718A (en) * 2017-08-02 2018-02-02 宇龙计算机通信科技(深圳)有限公司 A kind of audio signal direction propagation method, apparatus, terminal and storage medium
CN107623776A (en) * 2017-08-24 2018-01-23 维沃移动通信有限公司 A kind of method for controlling volume, system and mobile terminal
CN107992816A (en) * 2017-11-28 2018-05-04 广东小天才科技有限公司 One kind is taken pictures searching method, device and electronic equipment
CN108055403A (en) * 2017-12-21 2018-05-18 努比亚技术有限公司 A kind of audio method of adjustment, terminal and computer readable storage medium
CN108319440A (en) * 2017-12-21 2018-07-24 维沃移动通信有限公司 Audio-frequency inputting method and mobile terminal
CN108509856A (en) * 2018-03-06 2018-09-07 深圳市沃特沃德股份有限公司 Audio regulation method, device and stereo set
CN108491181A (en) * 2018-03-27 2018-09-04 联想(北京)有限公司 A kind of audio output device and method

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111510571A (en) * 2019-01-30 2020-08-07 京瓷办公信息系统株式会社 Image forming apparatus with a toner supply device
CN111510571B (en) * 2019-01-30 2022-03-18 京瓷办公信息系统株式会社 Image forming apparatus with a toner supply device
CN112565973A (en) * 2020-12-21 2021-03-26 Oppo广东移动通信有限公司 Terminal, terminal control method, terminal control device and storage medium
WO2022134910A1 (en) * 2020-12-21 2022-06-30 Oppo广东移动通信有限公司 Terminal, terminal control method and apparatus, and storage medium
CN113055810A (en) * 2021-03-05 2021-06-29 广州小鹏汽车科技有限公司 Sound effect control method, device, system, vehicle and storage medium
CN113050076A (en) * 2021-03-25 2021-06-29 京东方科技集团股份有限公司 Method, device and system for sending directional audio information and electronic equipment
CN113676818A (en) * 2021-08-02 2021-11-19 维沃移动通信有限公司 Player, control method and control device thereof, and computer-readable storage medium
WO2023011364A1 (en) * 2021-08-02 2023-02-09 维沃移动通信有限公司 Playing device, control method and control apparatus therefor, and computer-readable storage medium
CN113676818B (en) * 2021-08-02 2023-11-10 维沃移动通信有限公司 Playback apparatus, control method and control device thereof, and computer-readable storage medium
CN114615542A (en) * 2022-03-25 2022-06-10 联想(北京)有限公司 Control method, control device and content sharing system

Also Published As

Publication number Publication date
CN109284081B (en) 2022-06-24

Similar Documents

Publication Publication Date Title
CN109284081A (en) A kind of output method of audio, device and audio frequency apparatus
CN103578474B (en) A kind of sound control method, device and equipment
CN110495819B (en) Robot control method, robot, terminal, server and control system
CN107835367A (en) A kind of image processing method, device and mobile terminal
CN108307102B (en) Information display method, apparatus and system
CN107817939A (en) A kind of image processing method and mobile terminal
CN109213732A (en) A kind of method, mobile terminal and computer readable storage medium improving photograph album classification
CN107592468A (en) A kind of shooting parameter adjustment method and mobile terminal
CN109461117A (en) A kind of image processing method and mobile terminal
CN108989672A (en) A kind of image pickup method and mobile terminal
CN110505403A (en) A kind of video record processing method and device
CN107705251A (en) Picture joining method, mobile terminal and computer-readable recording medium
CN108052368B (en) A kind of application display interface control method and mobile terminal
CN109743498A (en) A kind of shooting parameter adjustment method and terminal device
CN108628568A (en) A kind of display methods of information, device and terminal device
CN107845057A (en) One kind is taken pictures method for previewing and mobile terminal
CN107786811B (en) A kind of photographic method and mobile terminal
CN107679514A (en) A kind of face identification method and electronic equipment
CN110166691A (en) A kind of image pickup method and terminal device
CN110490897A (en) Imitate the method and electronic equipment that video generates
CN108848313A (en) A kind of more people's photographic methods, terminal and storage medium
CN113365085B (en) Live video generation method and device
CN108881544A (en) A kind of method taken pictures and mobile terminal
CN107888833A (en) A kind of image capturing method and mobile terminal
CN109819167A (en) A kind of image processing method, device and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant