CN104092936A - Automatic focusing method and apparatus - Google Patents

Automatic focusing method and apparatus

Info

Publication number
CN104092936A
Authority
CN
China
Prior art keywords
acoustic information
sound source
source position
sound
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410261049.2A
Other languages
Chinese (zh)
Other versions
CN104092936B (en)
Inventor
唐明勇
刘华一君
周志农
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Technology Co Ltd
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Inc filed Critical Xiaomi Inc
Priority to CN201410261049.2A priority Critical patent/CN104092936B/en
Publication of CN104092936A publication Critical patent/CN104092936A/en
Application granted granted Critical
Publication of CN104092936B publication Critical patent/CN104092936B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Studio Devices (AREA)

Abstract

The invention relates to an automatic focusing method and apparatus, and belongs to the technical field of photography. The method comprises: during focusing, collecting sound information of the surrounding environment; analyzing, according to the sound information, a sound source position of the sound information; and automatically focusing on a target object at the sound source position. According to the invention, a sounding object is focused on via its sound source position. This solves the problem that conventional touch-based automatic focusing requires the user to control focusing through a touch screen and therefore cannot be used when the user is in a state in which operating the electronic device is inconvenient, such as holding a tablet device with both hands or controlling the device with a remote controller; the effect achieved is normal focusing even when the user cannot conveniently operate the electronic device.

Description

Automatic focusing method and device
Technical field
The present invention relates to the field of photography technology, and in particular to an automatic focusing method and device.
Background technology
Focusing refers to the process of changing the object distance and the image distance through the focusing mechanism of a camera so that the photographed object is imaged clearly. With the rapid development of electronic equipment, various electronic devices with a shooting function are used more and more frequently, and users' requirements for the focus function are also increasingly high.
A related-art automatic focusing method comprises: during shooting, an electronic device displays a viewfinder picture on a touch screen; the electronic device receives a click signal from the user on the touch screen, and automatically focuses on the object clicked in the viewfinder picture.
In the course of realizing the present disclosure, the inventors found that the above approach has at least the following defect: although the focusing procedure itself is automatic, selecting the focus target mainly relies on user operation. When the user is in a state in which operating the electronic device is inconvenient, such as holding a tablet device with both hands or controlling the electronic device with a remote controller, the above automatic focusing method cannot be used. Moreover, tapping the touch screen also shakes the electronic device and disturbs the focusing process.
Summary of the invention
In order to solve the problem that current touch-based automatic focusing methods require the user to control focusing through a touch screen and therefore cannot be used when the user is in a state in which operating the electronic device is inconvenient, embodiments of the present invention provide an automatic focusing method and device. The technical solution is as follows:
According to a first aspect of the embodiments of the present invention, an automatic focusing method is provided, the method comprising:
during focusing, collecting sound information of the surrounding environment;
analyzing, according to the sound information, a sound source position of the sound information; and
automatically focusing on a target object at the sound source position.
Optionally, analyzing the sound source position of the sound information according to the sound information comprises:
when there are two or more pieces of sound information, parsing each piece of sound information to obtain sound features of the sound information;
detecting whether the sound features match sound features of preset sound information; and
if the sound features match the sound features of the preset sound information, analyzing the sound source position of the sound information.
Optionally, the method further comprises:
obtaining a scene mode corresponding to the surrounding environment; and
selecting, from at least one piece of preset sound information, sound information matching the scene mode as the preset sound information.
Optionally, automatically focusing on the target object at the sound source position comprises:
performing preliminary focusing on the sound source position to obtain image information;
identifying, in the image information, the target object at the sound source position;
detecting whether the target object is the sounding body of the sound information; and
if the target object is the sounding body of the sound information, automatically focusing on the target object.
Optionally, performing preliminary focusing on the sound source position to obtain image information comprises:
when the sound source position is not within the range of the current lens, adjusting the orientation and attitude of the lens according to the sound source position; and
performing preliminary focusing on the sound source position through the adjusted lens to obtain image information.
Optionally, the method further comprises:
continuously collecting sound information of the target object; and
tracking and focusing on the target object according to the continuously collected sound information.
According to a second aspect of the embodiments of the present invention, an automatic focusing device is provided, the device comprising:
a sound collection module configured to collect sound information of the surrounding environment during focusing;
a sound source positioning module configured to analyze, according to the sound information, a sound source position of the sound information; and
an image collection module configured to automatically focus on a target object at the sound source position.
Optionally, the sound source positioning module further comprises:
a sound parsing unit, a feature detection unit and a sound source positioning unit;
the sound parsing unit is configured to, when there are two or more pieces of sound information, parse each piece of sound information to obtain sound features of the sound information;
the feature detection unit is configured to detect whether the sound features match sound features of preset sound information; and
the sound source positioning unit is configured to analyze the sound source position of the sound information when the sound features match the sound features of the preset sound information.
Optionally, the device further comprises:
a scene matching module configured to obtain a scene mode corresponding to the surrounding environment, and to select, from at least one piece of preset sound information, sound information matching the scene mode as the preset sound information.
Optionally, the image collection module further comprises: a preliminary focusing unit, an image recognition unit, an image detection unit and an automatic focusing unit;
the preliminary focusing unit is configured to perform preliminary focusing on the sound source position to obtain image information;
the image recognition unit is configured to identify, in the image information, the target object at the sound source position;
the image detection unit is configured to detect whether the target object is the sounding body of the sound information; and
the automatic focusing unit is configured to automatically focus on the target object when the target object is the sounding body of the sound information.
Optionally, the preliminary focusing unit comprises:
a lens adjustment subunit configured to, when the sound source position is not within the range of the current lens, adjust the orientation and attitude of the lens according to the sound source position; and
a preliminary focusing subunit configured to perform preliminary focusing on the sound source position through the adjusted lens and to obtain image information.
Optionally, the device further comprises: a tracking focusing module;
the tracking focusing module is configured to continuously collect sound information of the target object, and to track and focus on the target object according to the continuously collected sound information.
According to a third aspect of the embodiments of the present invention, an automatic focusing device is provided, the device comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
collect sound information of the surrounding environment during focusing;
analyze, according to the sound information, a sound source position of the sound information; and
automatically focus on a target object at the sound source position.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
When the photographed object can make sound, the sounding body is focused on via its sound source position. This solves the problem that current touch-based automatic focusing methods require the user to control focusing through a touch screen and therefore cannot be used when the user is in a state in which operating the electronic device is inconvenient, and that tapping the touch screen also shakes the electronic device and disturbs the focusing process. The effect achieved is that focusing proceeds normally even when operating the electronic device is inconvenient, and the focusing process does not suffer device shake caused by tapping the touch screen.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the disclosure.
Brief description of the drawings
The accompanying drawings herein are incorporated into and form part of this specification, illustrate embodiments consistent with the disclosure, and together with the specification serve to explain the principles of the disclosure.
Fig. 1 is a flow chart of an automatic focusing method according to an exemplary embodiment;
Fig. 2A is a flow chart of an automatic focusing method according to another exemplary embodiment;
Fig. 2B is a schematic external view of a terminal according to an exemplary embodiment;
Fig. 2C is a schematic diagram of recording preset sound information in an automatic focusing process according to an exemplary embodiment;
Fig. 2D is a schematic diagram of two-dimensional localization of a sound source in an automatic focusing process according to an exemplary embodiment;
Fig. 2E is a schematic diagram of an automatic focusing process according to an exemplary embodiment;
Fig. 3A is a flow chart of an automatic focusing method according to another exemplary embodiment;
Fig. 3B is a schematic diagram of selecting a scene mode in an automatic focusing process according to an exemplary embodiment;
Fig. 3C is a schematic diagram of selecting preset sound information in an automatic focusing process according to an exemplary embodiment;
Fig. 3D is a schematic diagram of adjusting the lens orientation and attitude in an automatic focusing process according to an exemplary embodiment;
Fig. 4 is a block diagram of an automatic focusing device according to an exemplary embodiment;
Fig. 5 is a block diagram of an automatic focusing device according to another exemplary embodiment;
Fig. 6 is a block diagram of an automatic focusing device according to another exemplary embodiment.
The above drawings show specific embodiments of the disclosure, which are described in more detail hereinafter. These drawings and the textual description are not intended to limit the scope of the disclosed concept in any way, but to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Embodiments
Exemplary embodiments are described here in detail, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the disclosure; on the contrary, they are merely examples of devices and methods consistent with some aspects of the disclosure, as detailed in the appended claims.
The terminal described herein may be any electronic product with a shooting function, such as a camera phone, a camera, a video camera or a surveillance camera.
Fig. 1 is a flow chart of an automatic focusing method according to an exemplary embodiment; this embodiment is described taking the case where the method is applied in a terminal. The automatic focusing method may comprise the following steps:
In step 102, during focusing, sound information of the surrounding environment is collected;
In step 104, the sound source position of the sound information is analyzed according to the sound information;
In step 106, a target object at the sound source position is automatically focused on.
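The three steps above can be sketched as a minimal pipeline. This is an illustrative sketch only; the patent does not prescribe an implementation, and all function names and the toy stand-ins below are assumptions:

```python
def autofocus_by_sound(collect_audio, locate_source, focus_on):
    """Toy sketch of steps 102-106: capture sound, localize it, focus."""
    sound_info = collect_audio()          # step 102: collect environment sound
    position = locate_source(sound_info)  # step 104: analyze the sound source position
    return focus_on(position)             # step 106: autofocus on the target there

# toy stand-ins for the three stages
result = autofocus_by_sound(
    collect_audio=lambda: "child's voice",
    locate_source=lambda s: (30.0, 2.5),  # (direction in degrees, distance in m)
    focus_on=lambda pos: f"focused at {pos[1]} m, bearing {pos[0]} deg",
)
```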
In summary, in the automatic focusing method provided by this embodiment, when the photographed object can make sound, the sounding body is focused on via its sound source position. This solves the problem that current touch-based automatic focusing methods require the user to control focusing through a touch screen and therefore cannot be used when the user is in a state in which operating the electronic device is inconvenient, such as holding a tablet device with both hands or controlling the electronic device with a remote controller or by voice; moreover, tapping the touch screen also shakes the electronic device and disturbs the focusing process. The effect achieved is that focusing proceeds normally even when operating the electronic device is inconvenient, and the focusing process does not suffer device shake caused by tapping the touch screen.
Fig. 2A is a flow chart of an automatic focusing method according to another exemplary embodiment; this embodiment is described taking the case where the method is applied in a terminal. The automatic focusing method may comprise the following steps:
In step 201, during focusing, sound information of the surrounding environment is collected.
Since this embodiment needs to obtain the sound source position of the sounding body, the sound information must be acquired by two or more detection points with different planar or spatial orientations; each detection point may be a microphone.
Taking a mobile phone as an example of the terminal, and with reference to Fig. 2B, which shows the external appearance of a mobile phone 20: there is one microphone 22 at the top of the phone 20 and another microphone 24 at its bottom. These two microphones form two detection points that acquire sound information from the environment around the phone 20. Optionally, to realize three-dimensional sound source localization, the two or more detection points may be implemented by a microphone array arranged in the terminal; the microphone array may be a three-element, four-element, five-element or six-element microphone array, etc.
During focusing, after the terminal receives an instruction to start focusing, the two or more detection points with different planar or spatial orientations begin collecting sound information of the surrounding environment.
In step 202, the sound information is parsed to obtain its sound features.
The ambient sound of the environment around the terminal may be a mixture of two or more pieces of sound information. One part is sound information useful for focusing, such as the sound made by the photographed object; the other part is sound information that is useless or even disruptive for focusing, such as environmental noise. Normally the terminal only needs to focus according to the useful sound information, so the terminal identifies the sound information to use for this focusing operation by analyzing the sound features of the sound information.
The sound feature obtained in this embodiment may be, but is not limited to, a cepstrum feature. A cepstrum feature is obtained by applying an inverse Fourier transform to the logarithmic power spectrum of the sound information; it can further separate the vocal tract characteristics from the excitation characteristics, and therefore better reveals the essential characteristics of the sound information.
After collecting at least one piece of sound information from the surrounding environment, the terminal parses each piece of sound information and obtains its cepstrum feature.
It should be noted that only the cepstrum feature of the sound information obtained by any one of the two or more detection points needs to be obtained.
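The cepstrum feature mentioned in this step, the inverse Fourier transform of the logarithmic power spectrum, can be sketched as follows. This is a minimal pure-Python illustration using a naive DFT; a real implementation would use an FFT library over windowed frames, and all names here are illustrative rather than taken from the patent:

```python
import cmath
import math

def dft(x):
    """Naive O(n^2) discrete Fourier transform (for illustration only)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n) for k in range(n))
            for j in range(n)]

def real_cepstrum(frame, eps=1e-12):
    """Inverse Fourier transform of the log power spectrum of one frame."""
    spectrum = dft(frame)
    log_power = [math.log(abs(c) ** 2 + eps) for c in spectrum]
    n = len(log_power)
    # the log power spectrum of a real frame is real and even, so its
    # inverse DFT (the cepstrum) is real and even as well
    return [sum(log_power[k] * cmath.exp(2j * math.pi * j * k / n)
                for k in range(n)).real / n
            for j in range(n)]

frame = [math.sin(2 * math.pi * 5 * t / 32) for t in range(32)]
ceps = real_cepstrum(frame)
```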
In step 203, it is detected whether the sound features match the sound features of preset sound information.
The preset sound information may be sound information built into the terminal itself, or sound information pre-recorded in the terminal by the user.
For example, user A often uses the terminal to shoot images or videos of A's own son, so user A may pre-record the son's sound information as the preset sound information.
As another example, user B often uses the terminal to shoot the lectures of B's teacher, so user B may pre-record the teacher's sound information as the preset sound information. As shown in Fig. 2C, the user can click the record button 26 in the camera settings interface 21 and record a section of sound as the preset sound information.
During focusing, the terminal detects whether the obtained sound features match the sound features of the preset sound information. If they match, step 204 is entered.
It should be noted that if n pieces of sound information are collected from the environment and there is 1 piece of preset sound information, this step needs to be performed n times; if 1 piece of sound information is collected and there are m pieces of preset sound information, this step needs to be performed m times; and if n pieces are collected and there are m pieces of preset sound information, this step needs to be performed n*m times.
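The counting in the note above amounts to a nested loop that compares every collected piece of sound information against every preset one, n*m comparisons in total. A minimal sketch, with the names and the equality predicate as illustrative assumptions:

```python
def find_matches(collected_features, preset_features, matches):
    """Compare each collected feature against each preset feature: n*m checks."""
    hits = []
    for i, c in enumerate(collected_features):
        for j, p in enumerate(preset_features):
            if matches(c, p):
                hits.append((i, j))  # collected item i matched preset item j
    return hits

# n = 3 collected, m = 2 preset -> 6 comparisons in total
hits = find_matches([1, 5, 9], [5, 9], lambda c, p: c == p)
```

In practice the predicate would be a feature-distance test, such as the DTW comparison described later in this specification, rather than exact equality.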
In step 204, if the sound features match the sound features of the preset sound information, the sound source position of the sound information is analyzed.
If the voice recognition model of the obtained sound information matches the voice recognition model of the preset sound information, the terminal analyzes the sound source position of this sound information.
This embodiment can analyze the sound source position from the difference in arrival times of the same sound information at the two or more detection points with different planar or spatial orientations.
The sound source position may comprise a sound source direction and a sound source distance. The sound source direction is the direction of the sound source relative to the terminal, and the sound source distance is the distance between the sound source and the terminal.
This step may comprise the following sub-steps:
1. obtaining, via the two or more detection points with different planar or spatial orientations, the time difference with which the same sound information arrives;
Because the spatial positions of the detection points differ, the same sound information arrives at each detection point at a different time, so there is a time difference between the arrivals.
2. calculating the sound source direction and the sound source distance of the sound information relative to the terminal from the arrival-time differences of the same sound information at the different detection points, the spatial distances between the detection points, and a delay-difference algorithm.
As shown in Fig. 2D, taking two-dimensional sound source localization as an example: because the distance a between the sound source 23 and microphone 27a differs from the distance b between the sound source 23 and microphone 27b, the sound emitted by the source 23 arrives at microphones 27a and 27b at different times. From the distance c between the two microphones and a delay-difference algorithm, the sound source direction α and the sound source distance d of the source 23 relative to the terminal can be calculated. Of course, with three or more detection points, three-dimensional sound source localization can also be realized.
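Under a far-field assumption, the arrival-time difference from sub-step 1 maps to a direction of arrival via sin α = v·Δt / c, where c is the microphone spacing and v the speed of sound. The sketch below illustrates only this direction part of the delay-difference idea (recovering the distance d of Fig. 2D additionally requires near-field geometry or more microphones); the function name and the 343 m/s figure are assumptions, not taken from the patent:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumed)

def doa_from_tdoa(delay_s, mic_spacing_m, v=SPEED_OF_SOUND):
    """Far-field direction of arrival, in degrees from broadside, from the
    time-difference-of-arrival of one sound at two microphones."""
    ratio = v * delay_s / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp against measurement noise
    return math.degrees(math.asin(ratio))

# a delay of 0.05/343 s across a 0.10 m baseline gives sin(alpha) = 0.5
angle = doa_from_tdoa(0.05 / SPEED_OF_SOUND, 0.10)
```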
In step 205, the target object at the sound source position is automatically focused on.
After determining the sound source position, the terminal can focus automatically according to the sound source position.
For example, as shown in Fig. 2E, a child 29 is at the sound source position. The terminal can collect the child's sound information, and when the sound features of the collected sound information match the sound features of the preset sound information (the child's voice), the terminal performs sound source localization on the child 29 and thereby obtains the child's sound source position. The terminal can then focus automatically according to the child's sound source position, ready for subsequent photographing or video recording.
In summary, in the automatic focusing method provided by this embodiment, when the photographed object can make sound, the sounding body is focused on via its sound source position. This solves the problem that current touch-based automatic focusing methods require the user to control focusing through a touch screen and therefore cannot be used when the user is in a state in which operating the electronic device is inconvenient, such as holding a tablet device with both hands or controlling the electronic device with a remote controller or by voice; moreover, tapping the touch screen also shakes the electronic device and disturbs the focusing process. The effect achieved is that focusing proceeds normally even when operating the electronic device is inconvenient, and the focusing process does not suffer device shake caused by tapping the touch screen.
Fig. 3A shows a flow chart of an automatic focusing method provided by another embodiment of the present invention; this embodiment is described taking the case where the method is applied in a terminal. The automatic focusing method may comprise the following steps:
In step 301, during focusing, sound information of the surrounding environment is collected.
Since this embodiment needs to obtain the sound source position of the sounding body, the sound information must be acquired by two or more detection points with different planar or spatial orientations; each detection point may be a microphone.
Optionally, to realize three-dimensional sound source localization, the two or more detection points may be implemented by a microphone array arranged in the terminal; the microphone array may be a three-element, four-element, five-element or six-element microphone array, etc.
During focusing, after the terminal receives an instruction to start focusing, the two or more detection points with different planar or spatial orientations begin collecting sound information of the surrounding environment.
In step 302, the sound information is parsed to obtain its sound features.
The ambient sound of the environment around the terminal may be a mixture of two or more pieces of sound information. One part is sound information useful for focusing, such as the sound made by the photographed object; the other part is sound information that is useless or even disruptive for focusing, such as environmental noise. Normally the terminal only needs to focus according to the useful sound information, so the terminal identifies the sound information to use for this focusing operation by analyzing the sound features of the sound information.
The sound feature obtained in this embodiment may be, but is not limited to, a cepstrum feature. A cepstrum feature is obtained by applying an inverse Fourier transform to the logarithmic power spectrum of the sound information; it can further separate the vocal tract characteristics from the excitation characteristics, and therefore better reveals the essential characteristics of the sound information.
After collecting at least one piece of sound information from the surrounding environment, the terminal parses each piece of sound information and obtains its cepstrum feature.
It should be noted that only the cepstrum feature of the sound information obtained by any one of the two or more detection points needs to be obtained.
In step 303, the scene mode corresponding to the surrounding environment is obtained.
The terminal may obtain the scene mode corresponding to the surrounding environment in either of the following two ways:
1) receiving the scene mode selected by the user from at least one preset scene mode;
That is, the terminal can provide several scene modes in advance, and the user selects among them; the terminal then receives the scene mode selected by the user from the at least one preset scene mode. Scene modes include but are not limited to: a children scene mode, a party scene mode, a racetrack scene mode, a classroom scene mode, a conference scene mode, etc.
For example, a user who needs to shoot people selects the party scene mode; a user who needs to shoot cars selects the racetrack scene mode; and a user who needs to shoot a teacher lecturing selects the classroom scene mode, as shown in Fig. 3B.
2) the terminal automatically selecting the scene mode according to the current geographic position of the environment;
For example, when the terminal determines via GPS positioning that the current geographic position is an assembly place, it sets the scene mode to the party scene mode; when GPS positioning shows the current position is a racetrack, it sets the racetrack scene mode; and when GPS positioning shows the current position is a classroom, it sets the classroom scene mode.
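The GPS-based branch can be pictured as a lookup from a geocoded place type to a scene mode. The mapping table below is purely an illustrative assumption; the patent does not specify how place types are named or matched:

```python
def scene_mode_for_location(place_type, default="children"):
    """Map a geocoded place type to a scene mode (illustrative table only)."""
    table = {
        "assembly place": "party",
        "racetrack": "racetrack",
        "classroom": "classroom",
    }
    return table.get(place_type, default)

mode = scene_mode_for_location("classroom")
```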
In step 304, sound information matching the scene mode is selected from at least one piece of preset sound information as the preset sound information.
If a plurality of pieces of sound information are pre-stored in the terminal, this step determines which of the at least one piece of pre-stored sound information the sound information collected in step 301 should be matched against.
The preset sound information may be sound information built into the terminal itself, or sound information pre-recorded in the terminal by the user.
For example, the terminal may provide a racetrack scene mode and, under it, several pieces of sound information, each corresponding to one kind of engine sound.
For example, user A1 often uses the terminal to shoot images or videos of A1's own son, so user A1 may pre-record the sound information of the son A2 as preset sound information.
As another example, user B1 often uses the terminal to shoot the lectures of B1's teacher, so user B1 may pre-record the sound information of the teacher B2 as preset sound information.
Table 1
After determining the current scene mode, the terminal selects, from the at least one piece of preset sound information, the sound information matching the current scene mode as the preset sound information used in this matching procedure.
For example, in the children scene mode, the terminal selects the sound information of the son A2 as the preset sound information used in this matching procedure.
In the racetrack scene mode, the terminal can display the user interface 32 shown in Fig. 3C, receive the sound information "Volkswagen 2.0 engine" 34 selected by the user in the interface 32, and use this sound information 34 as the preset sound information for this matching procedure.
In the classroom scene mode, the terminal selects the sound information of the teacher B2 as the preset sound information used in this matching procedure.
This step enables focusing on a specific target. For example, to focus only on one's own child within a group of children, the child's sound information is pre-stored, and during shooting the collected sound information is matched against this preset sound information.
In step 305, it is detected whether the sound feature matches the sound feature of the preset acoustic information.
The terminal detects whether the sound feature matches the sound feature of the preset acoustic information.
The terminal may establish a voice recognition model of the acoustic information by performing acoustic modeling on its cepstral features.
In this embodiment, a DTW (Dynamic Time Warping) algorithm may be used to detect whether the sound feature matches the sound feature of the preset acoustic information, that is, to detect whether the voice recognition model of the acquired acoustic information matches the voice recognition model of the preset acoustic information. DTW is a nonlinear warping algorithm that combines time warping with distance measurement.
After establishing the voice recognition models of the acquired acoustic information and of the preset acoustic information, the terminal may use the DTW algorithm to detect whether the two models match.
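As an illustrative sketch only (not the patent's implementation), the DTW comparison described above can be written as a dynamic program over two one-dimensional feature sequences; the `threshold` parameter of the decision rule is a hypothetical value, not taken from the patent:

```python
def dtw_distance(seq_a, seq_b):
    # classic dynamic-time-warping distance between two 1-D feature sequences
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            # the warping path may repeat or skip frames in either sequence,
            # which is what makes DTW tolerant of speaking-rate variation
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[n][m]

def features_match(feature, preset_feature, threshold=1.0):
    # hypothetical decision rule: accept when the DTW distance is small enough
    return dtw_distance(feature, preset_feature) <= threshold
```

A time-stretched copy of a feature track ([1, 2, 3] against [1, 2, 2, 3]) yields distance 0, which is the property that makes DTW suitable for comparing utterances spoken at different speeds.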
It should be noted that steps 303, 304 and 305 are optional steps.
In step 306, if the sound feature matches the sound feature of the preset acoustic information, the sound source position of the acoustic information is analyzed.
That is, if the voice recognition model of the acquired acoustic information matches the voice recognition model of the preset acoustic information, the sound source position of the acoustic information is analyzed.
In this embodiment, the sound source position may be determined from the time differences with which two or more detection points, facing different directions in a plane or in space, receive the same acoustic information.
The sound source position may include a sound source direction and a sound source distance. The sound source direction is the direction of the sound source relative to the terminal, and the sound source distance is the distance between the sound source and the terminal.
This step may include the following sub-steps:
1. Obtain, through two or more detection points facing different directions in a plane or in space, the times at which the same acoustic information arrives.
Because the spatial position of each detection point is different, the same acoustic information arrives at each detection point at a slightly different time, so time differences exist between the detection points.
2. From the arrival-time differences of the same acoustic information at the different detection points, the spatial distances between the detection points, and a delay-difference algorithm, calculate the sound source direction and sound source distance of this acoustic information relative to the terminal.
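A minimal sketch of the arrival-time-difference idea, under a far-field assumption with a two-microphone array; the speed of sound and the microphone spacing used below are assumed values, not taken from the patent:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed value for air at room temperature

def direction_from_tdoa(delta_t, mic_spacing):
    """Bearing of the source, in degrees from the array broadside.

    Far-field model: the wavefront reaches the farther microphone after
    travelling an extra path of SPEED_OF_SOUND * delta_t, so
    sin(theta) = c * delta_t / d for microphones separated by d metres.
    """
    ratio = SPEED_OF_SOUND * delta_t / mic_spacing
    ratio = max(-1.0, min(1.0, ratio))  # clamp measurement noise
    return math.degrees(math.asin(ratio))
```

Zero delay places the source on the array broadside (0 degrees); a delay of mic_spacing / c places it on the array axis (90 degrees). Estimating the sound source distance as well would require at least a third detection point, as sub-step 2 above implies.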
In step 307, preliminary focusing is performed on the sound source position, and image information is obtained.
The terminal performs preliminary focusing on the sounding object according to the sound source position: from the sound source position, the terminal can derive the distance between the target object and the terminal, and it adjusts the lens focus according to this distance.
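As an illustrative sketch only (the patent does not specify the focusing optics), adjusting the lens for a known subject distance can be modelled with the thin-lens equation; the 50 mm focal length in the example is an assumption:

```python
def image_distance_mm(subject_distance_m, focal_length_mm=50.0):
    # thin-lens equation 1/f = 1/u + 1/v, solved for the image distance v,
    # i.e. how far behind the lens the sensor must sit to be in focus
    f = focal_length_mm / 1000.0
    u = subject_distance_m
    if u <= f:
        raise ValueError("subject is inside the focal length")
    v = 1.0 / (1.0 / f - 1.0 / u)
    return v * 1000.0
```

A subject 2 m away with a 50 mm lens needs the sensor about 51.3 mm behind the lens; as the subject recedes toward infinity, the required image distance approaches the focal length.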
Because the sound source position may lie outside the current lens range, this step may include the following sub-steps:
1. When the sound source position is not within the current lens range, adjust the orientation and attitude of the lens according to the sound source position.
Because the sound source position may be located to the side of or behind the terminal, the terminal can adjust the orientation and attitude of the lens through an internal mechanical structure.
As shown in Fig. 3D, the terminal includes an electrically controlled rotating bracket 36. After obtaining the sound source position of sounding object A, the terminal detects whether the sound source position of sounding object A is within the current viewfinder range. If sounding object A is not within the current viewfinder range, the terminal calculates the angle x between the current shooting optical axis and the position of sounding object A and sends this information to the rotating bracket 36; the rotating bracket 36 rotates so that the current shooting optical axis points at sounding object A, and preliminary focusing is then performed.
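A sketch of the angle calculation described for the rotating bracket, assuming a horizontal-plane geometry with the current optical axis along +z; the coordinate convention is an assumption for illustration:

```python
import math

def pan_angle_degrees(source_x, source_z):
    # angle x between the current optical axis (+z) and the source direction,
    # measured in the horizontal plane; positive means "rotate to the right"
    return math.degrees(math.atan2(source_x, source_z))
```

A source directly ahead gives 0 degrees, one directly to the right gives 90 degrees, and one behind the terminal gives close to 180 degrees, which the bracket can resolve by rotating the shorter way.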
2. Perform preliminary focusing on the sound source position through the adjusted lens, and obtain image information.
The terminal obtains the image information of the preliminary focusing; this image information may be the whole image captured when the lens performs preliminary focusing, or only the partial image information of the focusing area.
In step 308, the target object at the sound source position is identified in the image information.
The terminal identifies the main subject in the image information through image recognition technology and takes this subject as the target object.
In step 309, it is detected whether the target object is the sounding object of the acoustic information.
Because the detected target object may not be the real sounding object at the sound source, the terminal also needs to detect whether the target object in the image information is the sounding object of the acoustic information.
As one implementation, this step includes the following sub-steps:
1. Query the preset image information associated with the preset acoustic information, set in advance.
For example, if the preset acoustic information matched in step 305 is a teacher's voice, the image information associated with it is that teacher's photo; if the matched preset acoustic information is a child's voice, the associated image information is that child's photo; if the matched preset acoustic information is a car engine sound, the associated image information is a photo of the car corresponding to that engine sound.
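The association query can be as simple as a lookup table set in advance by the user; the identifiers and file paths below are hypothetical:

```python
# hypothetical association table:
# preset acoustic-information ID -> reference image of the expected sounding object
PRESET_IMAGE_ASSOCIATIONS = {
    "teacher_B2": "photos/teacher_B2.jpg",
    "child_A2": "photos/child_A2.jpg",
    "engine_2.0": "photos/car_engine_2.0.jpg",
}

def associated_image(preset_id):
    # sub-step 1: query the image information associated with the matched preset
    return PRESET_IMAGE_ASSOCIATIONS.get(preset_id)
```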
2. Detect whether the preset image information associated with the matched preset acoustic information matches the image information of the target object.
3. If they match, determine that the target object is the sounding object of the acoustic information.
In this embodiment, one of the most common image matching methods can be used: a frequency-domain matching algorithm. This method transforms spatial-domain data into frequency-domain data through a time-frequency transform, and then determines the match between the two images through a similarity measure. The transform adopted in this embodiment may be the Fourier transform, and the similarity measure adopted may be the phase correlation.
The terminal may convert the obtained image information into frequency-domain data through the Fourier transform, and then detect the matching degree between the preset image information and the image information of the target object using the phase correlation. A threshold value can be set here: if the detection result is less than this threshold, the preset image information is considered to match the image information of the target object and the target object is determined to be the sounding object; if the detection result is greater than this threshold, the preset image information is considered not to match the image information of the target object.
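A minimal one-dimensional sketch of phase correlation (real implementations use a 2-D FFT over whole images; the naive DFT here is only for illustration). The peak of the normalized cross-power spectrum recovers the circular shift between two signals, and a sharp peak indicates a strong match:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(spectrum):
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

def phase_correlation_shift(a, b):
    # normalized cross-power spectrum: keep only the phase, discard magnitude
    A, B = dft(a), dft(b)
    R = []
    for Ak, Bk in zip(A, B):
        cross = Bk * Ak.conjugate()
        mag = abs(cross)
        R.append(cross / mag if mag > 1e-12 else 0j)
    r = idft(R)
    # the peak location gives the shift of b relative to a;
    # a peak value near 1.0 indicates a strong match
    peak = max(range(len(r)), key=lambda t: r[t].real)
    return peak, r[peak].real
```

Shifting a signal by two samples yields peak index 2 with a correlation close to 1.0; an unrelated signal produces a flat, low response, which is where the threshold decision described above comes in.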
In step 310, if the target object is the sounding object of the acoustic information, the target object is automatically focused.
If the preset image information matches the image information of the target object, the target object is further focused; if they do not match, no operation is performed and the user is prompted.
In step 311, the acoustic information of the target object is continuously collected.
Because the target object may move, if step 310 determines that the target object is the sounding object of the acoustic information, the terminal continuously collects the acoustic information of the sounding object.
In step 312, tracking focusing is performed on the target object according to the continuously collected acoustic information.
If the continuously collected acoustic information, after passing through steps 302 to 305, is detected to match the preset acoustic information, tracking focusing on the target object continues.
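The tracking loop of steps 311 and 312 can be sketched as follows; the callback functions are hypothetical stand-ins for the acquisition, matching, localization, and focusing steps described above:

```python
def tracking_focus_loop(capture_audio, matches_preset, localize, focus_on,
                        max_frames=1000):
    # steps 311-312: keep re-matching and re-focusing while the target sounds
    refocus_count = 0
    for _ in range(max_frames):
        frame = capture_audio()
        if frame is None:            # audio stream ended
            break
        if matches_preset(frame):    # steps 302-305: match against the preset
            position = localize(frame)   # step 306: locate the sound source
            focus_on(position)           # step 312: track-focus the target
            refocus_count += 1
    return refocus_count
```

A real terminal would also periodically re-verify the tracked object with image recognition, which improves tracking reliability.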
In summary, when the subject can produce sound, this embodiment focuses on the sounding object through its sound source position. This solves the problem that existing touch-based auto-focusing methods require the user to control focusing through a touch screen, and therefore cannot be used when the user is in a state in which operating the electronic device is inconvenient, such as holding a tablet device with both hands or controlling the electronic device by remote control or voice control; moreover, tapping the touch screen also shakes the electronic device and disturbs the focusing process. The effect achieved is that focusing proceeds normally even when operating the electronic device is inconvenient, and the focusing process is not disturbed by device shake caused by tapping the screen.
By pre-focusing on the sound source position first, and then analyzing whether the image information obtained by the pre-focusing corresponds to the sound feature of the acquired acoustic information, this embodiment increases the accuracy of focusing on the sounding object.
It should be added that the acquired acoustic information is matched against the preset acoustic information, and focusing is performed according to this acoustic information only when they match. This makes the method embodiment more practical, allowing the terminal to focus accurately on the intended subject even in a rather noisy environment. For example, when taking a self-portrait in a noisy park, the user can make the terminal focus only on himself or herself simply by speaking.
It should be added that when step 307 performs preliminary focusing on the sound source position, the lens direction can be rotated so that the sound source position falls within the viewfinder range of the lens, with the rotation angle determined from the sound source position information; likewise, when step 312 performs tracking focusing on the target object, the lens direction can also be rotated to keep the target object within the viewfinder range of the lens, achieving better tracking focusing. This feature allows the method embodiment to be applied flexibly to surveillance cameras: for example, when the terminal serves as a surveillance camera, a very large area can be monitored accurately and flexibly, with a certain intelligent monitoring effect, because whenever sound is collected from the monitored area, the surveillance camera turns toward the sound direction, then focuses and frames the shot. This feature also allows the method embodiment to be applied to tracking shots of moving objects: for example, on a racing track, several cameras applying this method embodiment can be placed around the track with the scene mode set to the racing-track scene, and the cameras will automatically track and focus on passing racing cars, greatly reducing the manpower required for shooting.
It should be added that when step 312 performs tracking focusing on the target object, the sounding object being tracked can be re-identified at any time through steps 307 to 310 to determine whether the currently tracked target object is correct; if it is not, the auto-focusing process restarts from step 301. This feature greatly improves the reliability of tracking focusing in this method embodiment. For example, several cameras applying this method embodiment can be placed around the stage at a concert; while the singer performs, these cameras can track, focus on and shoot the singer fairly accurately. Because a concert venue is crowded and rather chaotic, ordinary tracking focusing is prone to errors, whereas this feature greatly improves the reliability of tracking focusing and ensures that the tracked target is always the singer.
The following are apparatus embodiments of the present disclosure, which can be used to carry out the method embodiments of the present disclosure. For details not disclosed in the apparatus embodiments, please refer to the method embodiments of the present disclosure.
Fig. 4 is a block diagram of an automatic focusing apparatus according to an exemplary embodiment. The automatic focusing apparatus can be implemented as all or part of a terminal in software, hardware, or a combination of both. The automatic focusing apparatus includes:
a sound acquisition module 410, configured to collect acoustic information of the surrounding environment during focusing;
a sound source position module 420, configured to analyze the sound source position of the acoustic information according to the acoustic information; and
an image acquisition module 430, configured to automatically focus on the target object at the sound source position.
In summary, when the subject can produce sound, this embodiment focuses on the sounding object through its sound source position. This solves the problem that existing touch-based auto-focusing methods require the user to control focusing through a touch screen, and therefore cannot be used when the user is in a state in which operating the electronic device is inconvenient, such as holding a tablet device with both hands or controlling the electronic device by remote control or voice control; moreover, tapping the touch screen also shakes the electronic device and disturbs the focusing process. The effect achieved is that focusing proceeds normally even when operating the electronic device is inconvenient.
Fig. 5 is a block diagram of an automatic focusing apparatus according to an exemplary embodiment. The automatic focusing apparatus can be implemented as all or part of a terminal in software, hardware, or a combination of both. The automatic focusing apparatus includes:
a sound acquisition module 410, configured to collect acoustic information of the surrounding environment during focusing;
a sound source position module 420, configured to analyze the sound source position of the acoustic information according to the acoustic information; and
an image acquisition module 430, configured to automatically focus on the target object at the sound source position.
Optionally, the sound source position module 420 further includes:
a sound parsing unit 421, a feature detection unit 422 and a sound source position unit 423;
the sound parsing unit 421 being configured to parse the acoustic information to obtain the sound feature of the acoustic information;
the feature detection unit 422 being configured to detect whether the sound feature matches the preset acoustic information; and
the sound source position unit 423 being configured to analyze the sound source position of the acoustic information when the sound feature matches the sound feature of the preset acoustic information.
Optionally, the apparatus further includes:
a scene matching module 440, configured to obtain the scene mode corresponding to the surrounding environment and select, from at least one piece of preset acoustic information, the acoustic information matching the scene mode as the preset acoustic information.
Optionally, the image acquisition module 430 further includes: a preliminary focusing unit 431, an image recognition unit 432, an image detection unit 433 and an automatic focusing unit 434;
the preliminary focusing unit 431 being configured to perform preliminary focusing on the sound source position and obtain image information after the preliminary focusing;
the image recognition unit 432 being configured to identify the target object at the sound source position in the image information;
the image detection unit 433 being configured to detect whether the target object is the sounding object of the acoustic information; and
the automatic focusing unit 434 being configured to automatically focus on the target object when said target object is the sounding object of the acoustic information.
Optionally, said preliminary focusing unit 431 includes:
a lens adjustment subunit 431a, configured to adjust the orientation and attitude of said lens according to said sound source position when said sound source position is not within the current lens range; and
a preliminary focusing subunit 431b, configured to perform preliminary focusing on said sound source position through the adjusted lens and obtain image information.
Optionally, the apparatus further includes: a tracking focusing module 450;
the tracking focusing module 450 being configured to continuously collect the acoustic information of the target object, and perform tracking focusing on the target object according to the continuously collected acoustic information.
In summary, when the subject can produce sound, this embodiment focuses on the sounding object through its sound source position. This solves the problem that existing touch-based auto-focusing methods require the user to control focusing through a touch screen, and therefore cannot be used when the user is in a state in which operating the electronic device is inconvenient, such as holding a tablet device with both hands or controlling the electronic device by remote control or voice control; moreover, tapping the touch screen also shakes the electronic device and disturbs the focusing process. The effect achieved is that focusing proceeds normally even when operating the electronic device is inconvenient, and the focusing process is not disturbed by device shake caused by tapping the screen.
Fig. 6 is a block diagram of an automatic focusing apparatus 600 according to an exemplary embodiment. For example, the apparatus 600 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
Referring to Fig. 6, the apparatus 600 can include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls the overall operations of the apparatus 600, such as operations associated with display, telephone calls, data communications, camera operations and recording operations. The processing component 602 can include one or more processors 620 to execute instructions to complete all or part of the steps of the above methods. In addition, the processing component 602 can include one or more modules to facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operation of the apparatus 600. Examples of such data include instructions for any application program or method operated on the apparatus 600, contact data, phonebook data, messages, pictures, videos, and so on. The memory 604 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
The power component 606 provides power to the various components of the apparatus 600. The power component 606 can include a power management system, one or more power sources, and other components associated with generating, managing and distributing power for the apparatus 600.
The multimedia component 608 includes a screen that provides an output interface between the apparatus 600 and the user. In some embodiments, the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors can not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When the apparatus 600 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera can be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a microphone (MIC) configured to receive external audio signals when the apparatus 600 is in an operating mode, such as a call mode, a recording mode or a voice recognition mode. The received audio signals can be further stored in the memory 604 or sent via the communication component 616. In some embodiments, the audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which can be a keyboard, a click wheel, buttons, and the like. These buttons can include, but are not limited to: a home button, volume buttons, a start button and a lock button.
The sensor component 614 includes one or more sensors for providing state assessments of various aspects of the apparatus 600. For example, the sensor component 614 can detect the open/closed state of the apparatus 600 and the relative positioning of components, such as the display and keypad of the apparatus 600; the sensor component 614 can also detect a change in position of the apparatus 600 or a component of the apparatus 600, the presence or absence of user contact with the apparatus 600, the orientation or acceleration/deceleration of the apparatus 600, and a change in temperature of the apparatus 600. The sensor component 614 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 614 can also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 can also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the apparatus 600 and other devices. The apparatus 600 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 616 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 616 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the apparatus 600 can be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, for performing the automatic focusing methods provided by the above embodiments.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 604 including instructions, which can be executed by the processor 620 of the apparatus 600 to perform the above methods. For example, the non-transitory computer-readable storage medium can be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium: when the instructions in the storage medium are executed by the processor of the apparatus 600, the apparatus 600 is enabled to perform the automatic focusing methods provided by the above embodiments.
Those skilled in the art, after considering the specification and practicing the invention disclosed herein, will easily conceive of other embodiments of the present disclosure. The present application is intended to cover any variations, uses or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or conventional technical means in the art not disclosed by the present disclosure. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. An automatic focusing method, characterized in that the method comprises:
during focusing, collecting acoustic information of the surrounding environment;
analyzing the sound source position of the acoustic information according to the acoustic information; and
automatically focusing on the target object at the sound source position.
2. The method according to claim 1, characterized in that analyzing the sound source position of the acoustic information according to the acoustic information comprises:
when there are two or more pieces of acoustic information, parsing each piece of acoustic information to obtain the sound feature of the acoustic information;
detecting whether the sound feature matches the sound feature of preset acoustic information; and
if the sound feature matches the sound feature of the preset acoustic information, analyzing the sound source position of the acoustic information.
3. The method according to claim 2, characterized in that the method further comprises:
obtaining the scene mode corresponding to the surrounding environment; and
selecting, from at least one piece of preset acoustic information, the acoustic information matching the scene mode as the preset acoustic information.
4. The method according to any one of claims 1 to 3, characterized in that automatically focusing on the target object at the sound source position comprises:
performing preliminary focusing on the sound source position, and obtaining image information;
identifying the target object at the sound source position in the image information;
detecting whether the target object is the sounding object of the acoustic information; and
if the target object is the sounding object of the acoustic information, automatically focusing on the target object.
5. The method according to claim 4, characterized in that performing preliminary focusing on the sound source position and obtaining image information comprises:
when the sound source position is not within the current lens range, adjusting the orientation and attitude of the lens according to the sound source position; and
performing preliminary focusing on the sound source position through the adjusted lens, and obtaining image information.
6. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
continuously collecting the acoustic information of the target object; and
performing tracking focusing on the target object according to the continuously collected acoustic information.
7. An automatic focusing apparatus, characterized in that the apparatus comprises:
a sound acquisition module, configured to collect acoustic information of the surrounding environment during focusing;
a sound source position module, configured to analyze the sound source position of the acoustic information according to the acoustic information; and
an image acquisition module, configured to automatically focus on the target object at the sound source position.
8. The apparatus according to claim 7, characterized in that the sound source position module further comprises:
a sound parsing unit, a feature detection unit and a sound source position unit;
the sound parsing unit being configured to, when there are two or more pieces of acoustic information, parse each piece of acoustic information to obtain the sound feature of the acoustic information;
the feature detection unit being configured to detect whether the sound feature matches preset acoustic information; and
the sound source position unit being configured to analyze the sound source position of the acoustic information when the sound feature matches the sound feature of the preset acoustic information.
9. The apparatus according to claim 8, characterized in that the apparatus further comprises:
a scene matching module, configured to obtain the scene mode corresponding to the surrounding environment, and select, from at least one piece of preset acoustic information, the acoustic information matching the scene mode as the preset acoustic information.
10. The apparatus according to any one of claims 7 to 9, characterized in that the image acquisition module comprises: a preliminary focusing unit, an image recognition unit, an image detection unit and an automatic focusing unit;
the preliminary focusing unit being configured to perform preliminary focusing on the sound source position and obtain image information;
the image recognition unit being configured to identify the target object at the sound source position in the image information;
the image detection unit being configured to detect whether the target object is the sounding object of the acoustic information; and
the automatic focusing unit being configured to automatically focus on the target object when the target object is the sounding object of the acoustic information.
11. The apparatus according to claim 10, characterized in that the preliminary focusing unit comprises:
a lens adjustment subunit, configured to adjust the orientation and attitude of the lens according to the sound source position when the sound source position is not within the current lens range; and
a preliminary focusing subunit, configured to perform preliminary focusing on the sound source position through the adjusted lens and obtain image information.
12. The apparatus according to any one of claims 7 to 9, characterized in that the apparatus further comprises: a tracking focusing module;
the tracking focusing module being configured to continuously collect the acoustic information of the target object, and perform tracking focusing on the target object according to the continuously collected acoustic information.
13. An automatic focusing apparatus, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
during focusing, collect acoustic information of the surrounding environment;
analyze the sound source position of the acoustic information according to the acoustic information; and
automatically focus on the target object at the sound source position.
CN201410261049.2A 2014-06-12 2014-06-12 Automatic focusing method and device Active CN104092936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410261049.2A CN104092936B (en) 2014-06-12 2014-06-12 Automatic focusing method and device

Publications (2)

Publication Number Publication Date
CN104092936A true CN104092936A (en) 2014-10-08
CN104092936B CN104092936B (en) 2017-01-04

Family

ID=51640616

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410261049.2A Active CN104092936B (en) 2014-06-12 2014-06-12 Automatic focusing method and device

Country Status (1)

Country Link
CN (1) CN104092936B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101068308A (en) * 2007-05-10 2007-11-07 华为技术有限公司 System and method for controlling image collector to make target positioning
CN101345668A (en) * 2008-08-22 2009-01-14 中兴通讯股份有限公司 Control method and apparatus for monitoring equipment
CN101593522A (en) * 2009-07-08 2009-12-02 清华大学 A kind of full frequency domain digital hearing aid method and apparatus
CN101770139A (en) * 2008-12-29 2010-07-07 鸿富锦精密工业(深圳)有限公司 Focusing control system and method
CN102413276A (en) * 2010-09-21 2012-04-11 天津三星光电子有限公司 Digital video camera having sound-controlled focusing function
CN103051838A (en) * 2012-12-25 2013-04-17 广东欧珀移动通信有限公司 Shoot control method and device
CN103841357A (en) * 2012-11-21 2014-06-04 中兴通讯股份有限公司 Microphone array sound source positioning method, device and system based on video tracking
CN103957359A (en) * 2014-05-15 2014-07-30 深圳市中兴移动通信有限公司 Camera shooting device and focusing method thereof


Cited By (157)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016097887A1 (en) * 2014-12-19 2016-06-23 Sony Corporation Image forming method and apparatus and electronic device
CN105763787A (en) * 2014-12-19 2016-07-13 索尼公司 Image forming method, device and electronic device
CN105827928A (en) * 2015-01-05 2016-08-03 中兴通讯股份有限公司 Focusing area selection method and focusing area selection device
WO2016110012A1 (en) * 2015-01-05 2016-07-14 中兴通讯股份有限公司 Focus region selection method and apparatus, and computer-readable storage medium
CN106155050A (en) * 2015-04-15 2016-11-23 小米科技有限责任公司 Operating mode adjustment method and device for smart cleaning equipment, and electronic equipment
CN104967771A (en) * 2015-04-30 2015-10-07 广东欧珀移动通信有限公司 Method of controlling camera and mobile terminal
CN104883524A (en) * 2015-06-02 2015-09-02 阔地教育科技有限公司 Method and system for automatically tracking and shooting moving object in online class
CN104883524B (en) * 2015-06-02 2018-09-11 阔地教育科技有限公司 Automatic moving-target tracking and shooting method and system for online classes
CN104954673A (en) * 2015-06-11 2015-09-30 广东欧珀移动通信有限公司 Camera rotating control method and user terminal
CN104954673B (en) * 2015-06-11 2018-01-19 广东欧珀移动通信有限公司 Camera rotation control method and user terminal
WO2016131361A1 (en) * 2015-07-29 2016-08-25 中兴通讯股份有限公司 Monitoring system and method
CN106412488A (en) * 2015-07-29 2017-02-15 中兴通讯股份有限公司 Monitoring system and method
CN107018306A (en) * 2015-09-09 2017-08-04 美商富迪科技股份有限公司 Electronic installation
CN105208283A (en) * 2015-10-13 2015-12-30 广东欧珀移动通信有限公司 Soundsnap method and device
CN105227849A (en) * 2015-10-29 2016-01-06 维沃移动通信有限公司 Front-facing camera auto-focusing method and electronic device
CN105657253A (en) * 2015-12-28 2016-06-08 联想(北京)有限公司 Focusing method and electronic device
CN105657253B (en) * 2015-12-28 2019-03-29 联想(北京)有限公司 Focusing method and electronic device
CN105611167A (en) * 2015-12-30 2016-05-25 联想(北京)有限公司 Focusing plane adjusting method and electronic device
CN105791674A (en) * 2016-02-05 2016-07-20 联想(北京)有限公司 Electronic device and focusing method
CN105791674B (en) * 2016-02-05 2019-06-25 联想(北京)有限公司 Electronic equipment and focusing method
US11726742B2 (en) 2016-02-22 2023-08-15 Sonos, Inc. Handling of loss of pairing between networked devices
US11736860B2 (en) 2016-02-22 2023-08-22 Sonos, Inc. Voice control of a media playback system
US11042355B2 (en) 2016-02-22 2021-06-22 Sonos, Inc. Handling of loss of pairing between networked devices
US11405430B2 (en) 2016-02-22 2022-08-02 Sonos, Inc. Networked microphone device control
US11983463B2 (en) 2016-02-22 2024-05-14 Sonos, Inc. Metadata exchange involving a networked playback system and a networked microphone system
US11212612B2 (en) 2016-02-22 2021-12-28 Sonos, Inc. Voice control of a media playback system
US11184704B2 (en) 2016-02-22 2021-11-23 Sonos, Inc. Music service selection
US11513763B2 (en) 2016-02-22 2022-11-29 Sonos, Inc. Audio response playback
US11556306B2 (en) 2016-02-22 2023-01-17 Sonos, Inc. Voice controlled media playback system
US11006214B2 (en) 2016-02-22 2021-05-11 Sonos, Inc. Default playback device designation
US12047752B2 (en) 2016-02-22 2024-07-23 Sonos, Inc. Content mixing
US11514898B2 (en) 2016-02-22 2022-11-29 Sonos, Inc. Voice control of a media playback system
US10971139B2 (en) 2016-02-22 2021-04-06 Sonos, Inc. Voice control of a media playback system
US11832068B2 (en) 2016-02-22 2023-11-28 Sonos, Inc. Music service selection
US10743101B2 (en) 2016-02-22 2020-08-11 Sonos, Inc. Content mixing
US10764679B2 (en) 2016-02-22 2020-09-01 Sonos, Inc. Voice control of a media playback system
US11863593B2 (en) 2016-02-22 2024-01-02 Sonos, Inc. Networked microphone device control
US10970035B2 (en) 2016-02-22 2021-04-06 Sonos, Inc. Audio response playback
US11750969B2 (en) 2016-02-22 2023-09-05 Sonos, Inc. Default playback device designation
US10847143B2 (en) 2016-02-22 2020-11-24 Sonos, Inc. Voice control of a media playback system
US10714115B2 (en) 2016-06-09 2020-07-14 Sonos, Inc. Dynamic player selection for audio signal processing
US11133018B2 (en) 2016-06-09 2021-09-28 Sonos, Inc. Dynamic player selection for audio signal processing
US11545169B2 (en) 2016-06-09 2023-01-03 Sonos, Inc. Dynamic player selection for audio signal processing
US11664023B2 (en) 2016-07-15 2023-05-30 Sonos, Inc. Voice detection by multiple devices
US11184969B2 (en) 2016-07-15 2021-11-23 Sonos, Inc. Contextualization of voice inputs
US11979960B2 (en) 2016-07-15 2024-05-07 Sonos, Inc. Contextualization of voice inputs
US11531520B2 (en) 2016-08-05 2022-12-20 Sonos, Inc. Playback device supporting concurrent voice assistants
US10847164B2 (en) 2016-08-05 2020-11-24 Sonos, Inc. Playback device supporting concurrent voice assistants
CN106385540A (en) * 2016-09-26 2017-02-08 珠海格力电器股份有限公司 Focal length control method, device and system and mobile equipment
US11641559B2 (en) 2016-09-27 2023-05-02 Sonos, Inc. Audio playback settings for voice interaction
US11516610B2 (en) 2016-09-30 2022-11-29 Sonos, Inc. Orientation-based playback device microphone selection
CN109997370B (en) * 2016-09-30 2021-03-02 搜诺思公司 Multi-orientation playback device microphone
US10873819B2 (en) 2016-09-30 2020-12-22 Sonos, Inc. Orientation-based playback device microphone selection
CN109997370A (en) * 2016-09-30 2019-07-09 搜诺思公司 Multi-orientation playback device microphones
US11727933B2 (en) 2016-10-19 2023-08-15 Sonos, Inc. Arbitration-based voice recognition
US11308961B2 (en) 2016-10-19 2022-04-19 Sonos, Inc. Arbitration-based voice recognition
US10614807B2 (en) 2016-10-19 2020-04-07 Sonos, Inc. Arbitration-based voice recognition
US11183181B2 (en) 2017-03-27 2021-11-23 Sonos, Inc. Systems and methods of multiple voice services
US11380322B2 (en) 2017-08-07 2022-07-05 Sonos, Inc. Wake-word detection suppression
US11900937B2 (en) 2017-08-07 2024-02-13 Sonos, Inc. Wake-word detection suppression
US11080005B2 (en) 2017-09-08 2021-08-03 Sonos, Inc. Dynamic computation of system response volume
US11500611B2 (en) 2017-09-08 2022-11-15 Sonos, Inc. Dynamic computation of system response volume
US11646045B2 (en) 2017-09-27 2023-05-09 Sonos, Inc. Robust short-time fourier transform acoustic echo cancellation during audio playback
US11017789B2 (en) 2017-09-27 2021-05-25 Sonos, Inc. Robust Short-Time Fourier Transform acoustic echo cancellation during audio playback
US11769505B2 (en) 2017-09-28 2023-09-26 Sonos, Inc. Echo of tone interferance cancellation using two acoustic echo cancellers
US12047753B1 (en) 2017-09-28 2024-07-23 Sonos, Inc. Three-dimensional beam forming with a microphone array
US11102389B2 (en) 2017-09-28 2021-08-24 Canon Kabushiki Kaisha Image pickup apparatus and control method therefor
US11302326B2 (en) 2017-09-28 2022-04-12 Sonos, Inc. Tone interference cancellation
US10891932B2 (en) 2017-09-28 2021-01-12 Sonos, Inc. Multi-channel acoustic echo cancellation
US11538451B2 (en) 2017-09-28 2022-12-27 Sonos, Inc. Multi-channel acoustic echo cancellation
US10880644B1 (en) 2017-09-28 2020-12-29 Sonos, Inc. Three-dimensional beam forming with a microphone array
US11288039B2 (en) 2017-09-29 2022-03-29 Sonos, Inc. Media playback system with concurrent voice assistance
US11175888B2 (en) 2017-09-29 2021-11-16 Sonos, Inc. Media playback system with concurrent voice assistance
US11893308B2 (en) 2017-09-29 2024-02-06 Sonos, Inc. Media playback system with concurrent voice assistance
US10606555B1 (en) 2017-09-29 2020-03-31 Sonos, Inc. Media playback system with concurrent voice assistance
CN107800967A (en) * 2017-10-30 2018-03-13 维沃移动通信有限公司 Image capturing method and mobile terminal
US11451908B2 (en) 2017-12-10 2022-09-20 Sonos, Inc. Network microphone devices with automatic do not disturb actuation capabilities
US10880650B2 (en) 2017-12-10 2020-12-29 Sonos, Inc. Network microphone devices with automatic do not disturb actuation capabilities
US11676590B2 (en) 2017-12-11 2023-06-13 Sonos, Inc. Home graph
US11343614B2 (en) 2018-01-31 2022-05-24 Sonos, Inc. Device designation of playback and network microphone device arrangements
US11689858B2 (en) 2018-01-31 2023-06-27 Sonos, Inc. Device designation of playback and network microphone device arrangements
CN108494465A (en) * 2018-03-14 2018-09-04 维沃移动通信有限公司 Smart antenna beam adjustment method and mobile terminal
CN110351476A (en) * 2018-04-03 2019-10-18 佳能株式会社 Picture pick-up device and non-transitory recording medium
US11265477B2 (en) 2018-04-03 2022-03-01 Canon Kabushiki Kaisha Image capturing apparatus and non-transitory recording medium
CN110351476B (en) * 2018-04-03 2021-07-13 佳能株式会社 Image pickup apparatus and non-transitory recording medium
US11797263B2 (en) 2018-05-10 2023-10-24 Sonos, Inc. Systems and methods for voice-assisted media content selection
US11175880B2 (en) 2018-05-10 2021-11-16 Sonos, Inc. Systems and methods for voice-assisted media content selection
US10847178B2 (en) 2018-05-18 2020-11-24 Sonos, Inc. Linear filtering for noise-suppressed speech detection
US11715489B2 (en) 2018-05-18 2023-08-01 Sonos, Inc. Linear filtering for noise-suppressed speech detection
US10959029B2 (en) 2018-05-25 2021-03-23 Sonos, Inc. Determining and adapting to changes in microphone performance of playback devices
US11792590B2 (en) 2018-05-25 2023-10-17 Sonos, Inc. Determining and adapting to changes in microphone performance of playback devices
US11696074B2 (en) 2018-06-28 2023-07-04 Sonos, Inc. Systems and methods for associating playback devices with voice assistant services
US11197096B2 (en) 2018-06-28 2021-12-07 Sonos, Inc. Systems and methods for associating playback devices with voice assistant services
CN108989683A (en) * 2018-08-20 2018-12-11 崔跃 Automatic shooting system for children
US11563842B2 (en) 2018-08-28 2023-01-24 Sonos, Inc. Do not disturb feature for audio notifications
US11076035B2 (en) 2018-08-28 2021-07-27 Sonos, Inc. Do not disturb feature for audio notifications
US11482978B2 (en) 2018-08-28 2022-10-25 Sonos, Inc. Audio notifications
CN110874909A (en) * 2018-08-29 2020-03-10 杭州海康威视数字技术股份有限公司 Monitoring method, system and readable storage medium
US11432030B2 (en) 2018-09-14 2022-08-30 Sonos, Inc. Networked devices, systems, and methods for associating playback devices based on sound codes
US10878811B2 (en) 2018-09-14 2020-12-29 Sonos, Inc. Networked devices, systems, and methods for intelligently deactivating wake-word engines
US11778259B2 (en) 2018-09-14 2023-10-03 Sonos, Inc. Networked devices, systems and methods for associating playback devices based on sound codes
US11551690B2 (en) 2018-09-14 2023-01-10 Sonos, Inc. Networked devices, systems, and methods for intelligently deactivating wake-word engines
CN109194918B (en) * 2018-09-17 2022-04-19 东莞市丰展电子科技有限公司 Shooting system based on mobile carrier
CN109194918A (en) * 2018-09-17 2019-01-11 东莞市丰展电子科技有限公司 Camera system based on a mobile carrier
US11024331B2 (en) 2018-09-21 2021-06-01 Sonos, Inc. Voice detection optimization using sound metadata
US11790937B2 (en) 2018-09-21 2023-10-17 Sonos, Inc. Voice detection optimization using sound metadata
US11031014B2 (en) 2018-09-25 2021-06-08 Sonos, Inc. Voice detection optimization based on selected voice assistant service
US11727936B2 (en) 2018-09-25 2023-08-15 Sonos, Inc. Voice detection optimization based on selected voice assistant service
US10811015B2 (en) 2018-09-25 2020-10-20 Sonos, Inc. Voice detection optimization based on selected voice assistant service
US11790911B2 (en) 2018-09-28 2023-10-17 Sonos, Inc. Systems and methods for selective wake word detection using neural network models
US11100923B2 (en) 2018-09-28 2021-08-24 Sonos, Inc. Systems and methods for selective wake word detection using neural network models
US12062383B2 (en) 2018-09-29 2024-08-13 Sonos, Inc. Linear filtering for noise-suppressed speech detection via multiple network microphone devices
US11501795B2 (en) 2018-09-29 2022-11-15 Sonos, Inc. Linear filtering for noise-suppressed speech detection via multiple network microphone devices
US10692518B2 (en) 2018-09-29 2020-06-23 Sonos, Inc. Linear filtering for noise-suppressed speech detection via multiple network microphone devices
US11899519B2 (en) 2018-10-23 2024-02-13 Sonos, Inc. Multiple stage network microphone device with reduced power consumption and processing load
US11200889B2 (en) 2018-11-15 2021-12-14 Sonos, Inc. Dilated convolutions and gating for efficient keyword spotting
US11741948B2 (en) 2018-11-15 2023-08-29 Sonos Vox France Sas Dilated convolutions and gating for efficient keyword spotting
US11557294B2 (en) 2018-12-07 2023-01-17 Sonos, Inc. Systems and methods of operating media playback systems having multiple voice assistant services
US11183183B2 (en) 2018-12-07 2021-11-23 Sonos, Inc. Systems and methods of operating media playback systems having multiple voice assistant services
US11132989B2 (en) 2018-12-13 2021-09-28 Sonos, Inc. Networked microphone devices, systems, and methods of localized arbitration
US11538460B2 (en) 2018-12-13 2022-12-27 Sonos, Inc. Networked microphone devices, systems, and methods of localized arbitration
CN109683709A (en) * 2018-12-17 2019-04-26 苏州思必驰信息科技有限公司 Human-machine interaction method and system based on emotion recognition
US11540047B2 (en) 2018-12-20 2022-12-27 Sonos, Inc. Optimization of network microphone devices using noise classification
US11159880B2 (en) 2018-12-20 2021-10-26 Sonos, Inc. Optimization of network microphone devices using noise classification
US11315556B2 (en) 2019-02-08 2022-04-26 Sonos, Inc. Devices, systems, and methods for distributed voice processing by transmitting sound data associated with a wake word to an appropriate device for identification
US11646023B2 (en) 2019-02-08 2023-05-09 Sonos, Inc. Devices, systems, and methods for distributed voice processing
US11798553B2 (en) 2019-05-03 2023-10-24 Sonos, Inc. Voice assistant persistence across multiple network microphone devices
US11854547B2 (en) 2019-06-12 2023-12-26 Sonos, Inc. Network microphone device with command keyword eventing
US11361756B2 (en) 2019-06-12 2022-06-14 Sonos, Inc. Conditional wake word eventing based on environment
US11200894B2 (en) 2019-06-12 2021-12-14 Sonos, Inc. Network microphone device with command keyword eventing
US11501773B2 (en) 2019-06-12 2022-11-15 Sonos, Inc. Network microphone device with command keyword conditioning
US11138969B2 (en) 2019-07-31 2021-10-05 Sonos, Inc. Locally distributed keyword detection
US10871943B1 (en) 2019-07-31 2020-12-22 Sonos, Inc. Noise classification for event detection
US11710487B2 (en) 2019-07-31 2023-07-25 Sonos, Inc. Locally distributed keyword detection
US11551669B2 (en) 2019-07-31 2023-01-10 Sonos, Inc. Locally distributed keyword detection
US11354092B2 (en) 2019-07-31 2022-06-07 Sonos, Inc. Noise classification for event detection
US11138975B2 (en) 2019-07-31 2021-10-05 Sonos, Inc. Locally distributed keyword detection
US11714600B2 (en) 2019-07-31 2023-08-01 Sonos, Inc. Noise classification for event detection
CN110428850A (en) * 2019-08-02 2019-11-08 深圳市无限动力发展有限公司 Voice pick-up method, device, storage medium and mobile robot
US11189286B2 (en) 2019-10-22 2021-11-30 Sonos, Inc. VAS toggle based on device orientation
US11862161B2 (en) 2019-10-22 2024-01-02 Sonos, Inc. VAS toggle based on device orientation
US11200900B2 (en) 2019-12-20 2021-12-14 Sonos, Inc. Offline voice control
US11869503B2 (en) 2019-12-20 2024-01-09 Sonos, Inc. Offline voice control
US11562740B2 (en) 2020-01-07 2023-01-24 Sonos, Inc. Voice verification for media playback
US11556307B2 (en) 2020-01-31 2023-01-17 Sonos, Inc. Local voice data processing
US11961519B2 (en) 2020-02-07 2024-04-16 Sonos, Inc. Localized wakeword verification
US11308958B2 (en) 2020-02-07 2022-04-19 Sonos, Inc. Localized wakeword verification
US11308962B2 (en) 2020-05-20 2022-04-19 Sonos, Inc. Input detection windowing
US11482224B2 (en) 2020-05-20 2022-10-25 Sonos, Inc. Command keywords with input detection windowing
US11694689B2 (en) 2020-05-20 2023-07-04 Sonos, Inc. Input detection windowing
US11727919B2 (en) 2020-05-20 2023-08-15 Sonos, Inc. Memory allocation for keyword spotting engines
US11698771B2 (en) 2020-08-25 2023-07-11 Sonos, Inc. Vocal guidance engines for playback devices
CN112073639A (en) * 2020-09-11 2020-12-11 Oppo(重庆)智能科技有限公司 Shooting control method and device, computer readable medium and electronic equipment
US11984123B2 (en) 2020-11-12 2024-05-14 Sonos, Inc. Network device interaction by range
CN112565598A (en) * 2020-11-26 2021-03-26 Oppo广东移动通信有限公司 Focusing method and apparatus, terminal, computer-readable storage medium, and electronic device
US11551700B2 (en) 2021-01-25 2023-01-10 Sonos, Inc. Systems and methods for power-efficient keyword detection
CN113840087A (en) * 2021-09-09 2021-12-24 Oppo广东移动通信有限公司 Sound processing method, sound processing device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN104092936B (en) 2017-01-04

Similar Documents

Publication Publication Date Title
CN104092936A (en) Automatic focusing method and apparatus
CN105828201B (en) Video processing method and device
RU2647093C2 (en) Speech control method and apparatus for smart device, control device and smart device
CN105791958A (en) Method and device for live broadcasting game
WO2020103548A1 (en) Video synthesis method and device, and terminal and storage medium
CN111641794B (en) Sound signal acquisition method and electronic equipment
CN107515925A (en) Music playing method and device
CN103916711A (en) Method and device for playing video signals
CN105120191A (en) Video recording method and device
CN106303187B (en) Acquisition method, device and the terminal of voice messaging
CN104038827A (en) Multimedia playing method and device
CN104112129A (en) Image identification method and apparatus
CN106231378A (en) Display method, apparatus and system for live broadcast room
CN106331761A (en) Live broadcast list display method and apparatuses
CN105487863A (en) Interface setting method and device based on scene
CN105843503B (en) Application opening method, device and terminal device
WO2017181545A1 (en) Object monitoring method and device
CN103986999A (en) Method, device and terminal equipment for detecting earphone impedance
CN105406882A (en) Terminal equipment control method and device
CN105959587A (en) Shutter speed acquisition method and device
CN108108671A (en) Product description information acquisition method and device
CN106303198A (en) Photographing information acquisition methods and device
CN104243829A (en) Self-shooting method and self-shooting device
CN104156993A (en) Method and device for switching face image in picture
CN103955274A (en) Application control method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant