CN103458220A - Image processing method and electronic equipment - Google Patents


Info

Publication number
CN103458220A
Authority
CN
China
Prior art keywords
microphone
sub
time point
audio signal
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012101812835A
Other languages
Chinese (zh)
Inventor
郁凌
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN2012101812835A priority Critical patent/CN103458220A/en
Publication of CN103458220A publication Critical patent/CN103458220A/en
Pending legal-status Critical Current


Abstract

The invention provides an image processing method and electronic equipment. The image processing method is applied to the electronic equipment and comprises the steps of: acquiring, at a first moment, a first audio signal and a first image containing the sound source corresponding to the first audio signal; determining the area on the first image corresponding to the position of the sound source as a first area, the area other than the first area being a second area; and reducing the image data in the second area.

Description

Image processing method and electronic equipment
Technical field
The present invention relates to the field of communication technology, and in particular to an image processing method and electronic equipment.
Background technology
With the continuous improvement of network technology and the significant increase in the resolution of cameras, remote communication between people is gradually developing from voice exchange toward video exchange. For example, more and more users now choose video conferencing, which achieves the effect of a face-to-face exchange while sparing the fatigue of travel and, more importantly, saving time.
However, although network technology has improved, when a low network speed is encountered together with a high-resolution camera, the only options are to raise the image compression ratio through software and hardware compression algorithms, or to reduce the number of image frames, so as to accommodate the available network bandwidth.
Yet, in the course of realizing the present invention, the inventor found that in the prior art, because complex algorithms are used to raise the compression ratio of the image, the processing on the local or remote electronic device is time-consuming and energy-consuming, while the scheme of reducing the frame count directly degrades image quality and cannot guarantee that the needed video information is transmitted to the other party.
Summary of the invention
The invention provides an image processing method and electronic equipment, in order to solve the problem existing in the prior art that it is inconvenient to transmit image or video data under low network speed or low bandwidth.
One aspect of the present invention provides an image processing method applied to an electronic device. The method comprises: acquiring, at a first moment, a first audio signal and a first image containing the sound source corresponding to the first audio signal; determining the area on the first image corresponding to the position of the sound source as a first area, the area outside the first area being a second area; and reducing the image data of the second area.
Preferably, the electronic device comprises a first microphone and a second microphone, arranged respectively at a first corner and a second corner of the display of the electronic device, wherein the first corner is opposite the second corner. Determining the area on the first image corresponding to the position of the sound source as the first area specifically comprises: the first microphone recording the first audio signal at a first sub-time point; the second microphone recording the first audio signal at a second sub-time point; determining the position of the sound source based on the time difference between the first sub-time point and the second sub-time point; and determining the first area based on the position of the sound source.
Preferably, acquiring the first audio signal at the first moment specifically comprises: mixing the first audio signal recorded at the first sub-time point with the first audio signal recorded at the second sub-time point into the first audio signal at the first moment.
Preferably, the electronic device comprises a first microphone, a second microphone, a third microphone and a fourth microphone, arranged respectively at a first corner, a second corner, a third corner and a fourth corner of the display of the electronic device, the four microphones being located respectively on the four vertices of a rectangle. Determining the area on the first image corresponding to the position of the sound source as the first area specifically comprises: the first microphone recording the first audio signal at a first sub-time point; the second microphone recording the first audio signal at a second sub-time point; the third microphone recording the first audio signal at a third sub-time point; the fourth microphone recording the first audio signal at a fourth sub-time point; determining the position of the sound source based on the time differences between the first, second, third and fourth sub-time points; and determining the first area based on the position of the sound source.
Preferably, determining the first area based on the position of the sound source specifically comprises: determining the area within a preset range around the position of the sound source as the first area.
Another aspect of the present invention provides an electronic device, comprising: a sound input means which acquires a first audio signal at a first moment; an image acquisition means which acquires, at the first moment, a first image containing the sound source corresponding to the first audio signal; a first processing chip which determines the area on the first image corresponding to the position of the sound source as a first area, the area outside the first area being a second area; and a second processing chip which reduces the image data of the second area.
Preferably, the electronic device further comprises a display, and the sound input means is positioned on the display. The sound input means comprises: a first microphone, arranged at a first corner of the display, which records the first audio signal at a first sub-time point; and a second microphone, arranged at a second corner of the display, which records the first audio signal at a second sub-time point, wherein the first corner is opposite the second corner. The first processing chip specifically determines the position of the sound source based on the time difference between the first sub-time point and the second sub-time point, and determines the first area based on the position of the sound source.
Preferably, the sound input means further comprises a third processing chip for mixing the first audio signal recorded at the first sub-time point with the first audio signal recorded at the second sub-time point into the first audio signal at the first moment.
Preferably, the electronic device further comprises a display, and the sound input means is positioned on the display. The sound input means comprises: a first microphone, arranged at a first corner of the display, which records the first audio signal at a first sub-time point; a second microphone, arranged at a second corner of the display, which records the first audio signal at a second sub-time point; a third microphone, arranged at a third corner of the display, which records the first audio signal at a third sub-time point; and a fourth microphone, arranged at a fourth corner of the display, which records the first audio signal at a fourth sub-time point, wherein the four microphones are located respectively on the four vertices of a rectangle. The first processing chip specifically determines the position of the sound source based on the time differences between the first, second, third and fourth sub-time points, and determines the first area based on the position of the sound source.
Preferably, the first processing chip is specifically configured to determine the area within a preset range around the position of the sound source as the first area.
The one or more technical solutions provided in the embodiments of the present invention have at least the following technical effects or advantages:
In one embodiment of the invention, when taking a photograph or video image, the position on the image of the sound source emitting an audio signal is determined from the audio signal; the area corresponding to the position of this sound source is determined as a first area, and the image data in the area outside the first area is then reduced. For example, in a video conference the participants generally pay attention only to the speaker, because humans attend to a sound source far more than to the background. The scheme in the embodiments of the present application can therefore locate the sound source and then reduce the background image data outside the sound source area, so as to adapt to the bandwidth limitation. In this way, image compression is achieved without an overly complex algorithm and without affecting the main information of the image; on the contrary, it helps the participants pay greater attention to the conference speaker by reducing some interfering image information.
Further, in one embodiment of the invention, a microphone array is used to locate the sound source, with high positioning accuracy and good real-time performance. In addition, because a microphone array can form a directional beam that receives only the talker's voice, i.e. the speaker's voice, and can also suppress noise and interference in the environment, the audio signal can be collected more effectively.
Brief description of the drawings
Fig. 1 is a flow chart of the image processing method in one embodiment of the invention;
Fig. 2 is a structural diagram of the electronic device in one embodiment of the invention;
Fig. 3 is a schematic diagram of the image processing in one embodiment of the invention;
Fig. 4 is a functional block diagram of the electronic device in one embodiment of the invention.
Detailed description of the embodiments
The embodiments of the present invention provide an image processing method and electronic equipment, solving the technical problem existing in the prior art that it is inconvenient to transmit image or video data under low network speed or low bandwidth.
The technical solutions in the embodiments of the present invention address the above technical problem, and the general idea is as follows:
When taking a photograph or video image, the position on the image of the sound source emitting an audio signal is determined from the audio signal; the area corresponding to the position of this sound source is determined as a first area, and the image data in the area outside the first area is then reduced. For example, in a video conference the participants generally pay attention only to the speaker, because humans attend to a sound source far more than to the background. The scheme in the embodiments of the present application can therefore locate the sound source and then reduce the background image data outside the sound source area, so as to adapt to the bandwidth limitation. In this way, image compression is achieved without an overly complex algorithm and without affecting the main information of the image; on the contrary, it helps the participants pay greater attention to the conference speaker by reducing some interfering image information.
For a better understanding of the above technical solutions, they are described in detail below with reference to the accompanying drawings and the specific implementations.
One embodiment of the invention provides an image processing method applied to an electronic device. As shown in Fig. 1, the method comprises:
Step 110: at a first moment, acquiring a first audio signal and a first image containing the sound source corresponding to the first audio signal;
Step 112: determining the area on the first image corresponding to the position of the sound source as a first area, the area outside the first area being a second area;
Step 114: reducing the image data of the second area.
In step 110, the first audio signal acquired at the first moment may be obtained through a single microphone or through a microphone array. A microphone array can form a directional beam that receives only the talker's voice, i.e. the speaker's voice, and can also suppress noise and interference in the environment, so the first audio signal obtained through a microphone array can be clearer and substantially free of noise and interference.
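As an illustration of the directional-beam idea mentioned above, the following is a minimal delay-and-sum beamformer sketch. The signal parameters, sample rate and steering delays are invented for the example; this is not the patented implementation, only the textbook technique the paragraph refers to:

```python
import numpy as np

def delay_and_sum(signals, delays_samples):
    """Align each microphone channel by its steering delay (in samples)
    and average; a source in the steered direction adds coherently."""
    out = np.zeros(len(signals[0]))
    for sig, d in zip(signals, delays_samples):
        out += np.roll(sig, -d)  # advance the channel by its steering delay
    return out / len(signals)

# Toy check: a sinusoid arriving with known per-channel delays is
# recovered when steered at it, and attenuated when steered elsewhere.
fs, f = 8000, 200
t = np.arange(1024) / fs
clean = np.sin(2 * np.pi * f * t)
delays = [0, 3, 6, 9]                         # hypothetical arrival delays
signals = [np.roll(clean, d) for d in delays]
steered = delay_and_sum(signals, delays)      # beam aimed at the source
off_beam = delay_and_sum(signals, [0, 0, 0, 0])  # beam aimed elsewhere
```

With the beam steered at the source, the four channels add in phase and the clean sinusoid is recovered; with no steering delays the channels partially cancel, which is the attenuation of off-axis sound the paragraph describes.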
Further, the first image may be obtained through a camera, which may be embedded in the electronic device or externally connected to it through a connection interface (for example, a USB interface). In other embodiments the first image may also be obtained through other image acquisition means.
In step 112, the area on the first image corresponding to the position of the sound source is determined as the first area, and the area outside the first area is the second area. In the specific implementation process, sound source localization is performed first; after it is completed, the position of the sound source is marked on the image, i.e. the position of the sound source is determined on the first image. There are many sound source localization methods, for example microphone-array localization; and even within microphone-array localization there are different processing approaches, for example steerable-beamforming techniques based on maximum output power, or techniques based on the time difference of arrival. The following takes microphone-array localization based on the time difference of arrival as an example.
Referring to Fig. 2, the electronic device comprises a display 10 with four corners: a first corner 101, a second corner 102, a third corner 103 and a fourth corner 104. The electronic device further comprises a microphone array formed by four microphones, wherein a first microphone 201 is arranged at the first corner 101, a second microphone 202 at the second corner 102, a third microphone 203 at the third corner 103, and a fourth microphone 204 at the fourth corner 104. The mounting positions of the four microphones form a rectangle, with the first microphone 201, second microphone 202, third microphone 203 and fourth microphone 204 located respectively on its four vertices.
The specific method of locating the sound source comprises: the first microphone 201 recording the first audio signal at a first sub-time point t1; the second microphone 202 recording the first audio signal at a second sub-time point t2; the third microphone 203 recording the first audio signal at a third sub-time point t3; the fourth microphone 204 recording the first audio signal at a fourth sub-time point t4; and determining the position of the sound source based on the time differences between the first sub-time point t1, the second sub-time point t2, the third sub-time point t3 and the fourth sub-time point t4.
There are multiple ways to implement the above localization method. For example, suppose the coordinates of the first microphone 201, second microphone 202, third microphone 203 and fourth microphone 204 are S1(x1, y1, z1), S2(x2, y2, z2), S3(x3, y3, z3) and S4(x4, y4, z4) respectively, the spatial position of the sound source is T(x, y, z), the four microphones share the same sampling time base, and the local sound propagation velocity c can be obtained in real time. Using the principle that the distance between two points equals the sound propagation velocity multiplied by the time for the sound to travel from the sound source position to the microphone, the time-difference vector corresponding to the sound source position can be calculated, and the position T(x, y, z) of the sound source can then be solved.
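The distance-formula principle above can be sketched as a search for the point whose predicted arrival-time differences match the measured ones. The microphone coordinates, source position and search grid below are hypothetical values for a 2-D illustration, and a brute-force grid search stands in for whatever solver a real implementation would use:

```python
import numpy as np

c = 343.0  # assumed speed of sound, m/s

# Hypothetical microphone positions on the corners of a display (metres).
mics = np.array([[0.0, 0.0], [0.6, 0.0], [0.6, 0.4], [0.0, 0.4]])
true_src = np.array([0.25, 0.15])  # hypothetical speaker position

def tdoas(src):
    """Arrival-time differences of mics 2..4 relative to microphone 1."""
    t = np.linalg.norm(mics - src, axis=1) / c
    return t[1:] - t[0]

measured = tdoas(true_src)  # what the array would actually observe

# The source is the grid point whose predicted TDOAs best match the
# measured ones (least-squares over the plane in front of the display).
best, best_err = None, np.inf
for x in np.linspace(0, 0.6, 121):
    for y in np.linspace(0, 0.4, 81):
        err = np.sum((tdoas(np.array([x, y])) - measured) ** 2)
        if err < best_err:
            best, best_err = (x, y), err
```

The recovered `best` coincides with the assumed source position, showing that the time-difference vector determines T uniquely for this geometry.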
In another embodiment, the four microphones may record the first audio signal and output corresponding time-domain signals; each time-domain signal is then converted to a frequency-domain signal, for example via a fast Fourier transform, and a cross-spectrum calculation is performed on the frequency-domain signals. This yields the arrival-time differences (t2-t1) ~ (t4-t1) between the wavefront of the sound source entering microphone 201 and the wavefronts entering microphones 202 to 204; from these arrival-time differences, the positions of microphones 201 to 204 and the sound propagation velocity c, the position of the sound source can be derived.
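The arrival-time difference for one microphone pair can be estimated by locating the peak of the signals' cross-correlation. The sketch below uses a plain time-domain cross-correlation instead of the cross-spectrum route described above (both find the same peak); the signal length, sample rate and delay are invented:

```python
import numpy as np

def estimate_delay_samples(ref, delayed):
    """Estimate (in samples) how far `delayed` lags behind `ref` by
    locating the peak of their cross-correlation; dividing by the
    sampling rate then gives an arrival-time difference such as t2 - t1."""
    cc = np.correlate(delayed, ref, mode="full")
    return int(np.argmax(cc)) - (len(ref) - 1)

rng = np.random.default_rng(0)
ref = rng.standard_normal(512)     # signal at the reference microphone
true_delay = 37                    # hypothetical extra travel, in samples
delayed = np.concatenate([np.zeros(true_delay), ref])[:512]
lag = estimate_delay_samples(ref, delayed)
```

For a sampling rate fs, `lag / fs` is the time difference fed into the localization step; repeating this for each microphone pair yields the full (t2-t1) ~ (t4-t1) set.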
In another embodiment, two timers may also be used. Suppose the first microphone 201 receives the first audio signal first: when the first audio signal arrives and its onset is detected, the first timer is started. When the first audio signal then arrives at the second microphone 202 and its onset is detected, the first timer is stopped and its reading T1 is stored; the first timer is then cleared while the second timer is started. When the first audio signal arrives at the third microphone 203 and its onset is detected, the second timer is likewise stopped and its reading T2 is stored, while the first timer is started again. When the first audio signal arrives at the fourth microphone 204 and its onset is detected, the first timer is stopped and its reading T3 is stored.
Three time readings T1, T2 and T3 are thus obtained. The delay between the first microphone 201 and the second microphone 202 is T1; the delay between the first microphone 201 and the third microphone 203 is T1 plus T2; the delay between the first microphone 201 and the fourth microphone 204 is T1 plus T2 plus T3; the delay between the second microphone 202 and the third microphone 203 is T2; the delay between the second microphone 202 and the fourth microphone 204 is T2 plus T3; and so on. Combined with the sound propagation velocity c, the delay distance between every two microphones can then be calculated, and further the specific position of the sound source.
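The chain of timer readings above turns into every pairwise delay with simple arithmetic; the T1-T3 values and the speed of sound below are hypothetical:

```python
def pairwise_delays(T1, T2, T3):
    """Rebuild every microphone-pair delay from the three chained timer
    readings (arrival order mic1 -> mic2 -> mic3 -> mic4, seconds)."""
    arrival = [0.0, T1, T1 + T2, T1 + T2 + T3]  # relative arrival times
    return {(i + 1, j + 1): arrival[j] - arrival[i]
            for i in range(4) for j in range(i + 1, 4)}

c = 343.0  # assumed speed of sound, m/s
delays = pairwise_delays(0.0010, 0.0008, 0.0005)   # hypothetical readings
dist_2_4 = c * delays[(2, 4)]  # delay distance between mics 2 and 4
```

As the text states, the (2, 4) pair comes out as T2 + T3, and multiplying any pair's delay by c gives the delay distance used in the localization.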
Although the description above takes a microphone array formed by four microphones as an example, in practical applications an array formed by just two microphones can already achieve localization, so any number of microphones greater than or equal to two will do. For example, as shown in Fig. 2, only the first microphone 201 and the fourth microphone 204 may be used, or the first microphone 201 and the third microphone 203, or the combination of the third microphone 203 and the fourth microphone 204. In addition, the microphones need not necessarily be arranged at the corners, as long as they are not too close to one another; when they are farther apart the anti-interference capability is stronger, and the more microphones are arranged, the more acoustic information can be obtained and the more accurate the localization can be.
The approaches in the above embodiments are sound source localization methods well known to those skilled in the art and are not repeated here. Of course, there are other calculation methods as well, and those skilled in the art can adopt different localization methods according to the actual situation.
In addition, according to the principle of the microphone array, the first audio signals recorded by the microphones at the same or different time points are processed, for example mixed, to obtain the first audio signal at the first moment.
After the physical position of the sound source has been determined, the corresponding sound source position on the first image is determined based on this physical position, and the first area is then further determined based on the position of the sound source.
To determine the corresponding sound source position on the first image based on the physical position, for example, the physical position of the point sound source can be virtualized using three-dimensional virtual technology through the relative positional relationship between the microphone array and the camera: using the camera position information and lens angle information, a virtual camera view is constructed, and the virtual sound source physical position is "photographed" from this virtual camera view, yielding the two-dimensional image of this physical position on the camera image, i.e. the two-dimensional coordinates of this physical position on the image captured by the camera. The sound source position on the first image corresponding to this physical position can thus be determined. For example, in Fig. 2 the camera means 30, for example a camera, is located in the same plane as the microphone array, midway between the second microphone 202 and the third microphone 203, with the camera view perpendicular to this plane, so in this case the sound source position on the first image is easy to determine. In other embodiments, the camera means 30 may also be arranged at other positions, or be an external camera, as long as the relative positional relationship between the microphone array and the camera means 30 can be obtained.
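When the camera's position relative to the microphone array is known, mapping the located 3-D source position to image coordinates reduces to a camera projection. A minimal pinhole-projection sketch, with invented intrinsic parameters (the focal lengths fx, fy and principal point cx, cy are hypothetical, not values from the patent):

```python
def project_to_image(point, fx, fy, cx, cy):
    """Pinhole-camera sketch: map a 3-D point in camera coordinates
    (z pointing out of the display plane) to 2-D pixel coordinates."""
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)

# Hypothetical source: 0.5 m right, 0.2 m up, 2 m in front of the camera.
u, v = project_to_image((0.5, 0.2, 2.0), fx=800.0, fy=800.0, cx=640.0, cy=360.0)
```

The resulting (u, v) is the two-dimensional coordinate of the sound source on the first image, around which the first area is then drawn.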
In other embodiments, the physical position of the sound source may also be mapped into the first image by other means, which those skilled in the art can select according to actual needs.
After the sound source position on the first image is determined, the area corresponding to the sound source position on the first image is determined as the first area. In the specific implementation process, what the microphone array locates may be a point or a region. Especially when it is a point: because the speaker is a person, the speaker's head image should also be highlighted, so in this case the first area is determined as the area within a preset range around the sound source position, for example the range within a preset radius centered on the sound source position. This preset value is, for example, 1 cm or 2 cm; generally, the main image information of the speaker can be included within this radius.
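Selecting "the area within a preset range around the sound source position" can be sketched as a boolean mask over the image; the image size, centre and radius in pixels below are assumed values for illustration:

```python
import numpy as np

def first_area_mask(h, w, center, radius_px):
    """Boolean mask: True inside the preset radius around the located
    speaker position, i.e. the first area that keeps full quality."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = center
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= radius_px ** 2

# Hypothetical 640x480 frame with the speaker located at its centre.
mask = first_area_mask(480, 640, center=(240, 320), radius_px=100)
```

Everything where the mask is False is the second area, whose image data is then reduced in step 114.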
Although the above embodiment takes a single sound source as an example, in some cases two or more people may speak at the same time, even at positions far apart; the processing of each sound source is nevertheless the same. Those skilled in the art can clearly derive from the foregoing description the implementation for multiple sound sources, so it is not repeated here.
The specific implementation process of step 112 has been described in detail above; the specific implementation process of step 114 is introduced next.
After the first area and the second area have been determined in step 112, and before the first image is transmitted, the image data of the second area is reduced in order to adapt to low network speed and insufficient bandwidth. In this way, the viewer's reception of the information of interest is not affected, and because the amount of transmitted data is small, the transmission speed is faster; for example, during a video conference the whole process can be smoother and the real-time performance better.
In one embodiment, reducing the image data of the second area can be achieved in many ways. For example, the image of the first area is first segmented out, and high-frequency filtering is then performed on the second area using a low-pass filter to remove the high-frequency components. Because the detail of an image consists of high-frequency components while the contours are low-frequency components, eliminating the large amount of image detail greatly reduces the background image data; the first area is then stitched together with the processed second area using image stitching technology and transmitted out. As another example, the first area can be directly masked using masking technology and high-frequency filtering applied directly to the first image; because the first area is masked, only the second area is actually filtered, and after the processing is finished the mask of the first area is removed.
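The masking variant above can be sketched as follows: a separable box filter stands in for the low-pass filter, and a boolean mask protects the first area. The image size, mask region and kernel width are all assumed values, and the box filter is only one of many possible low-pass choices:

```python
import numpy as np

def box_blur(img, k):
    """Separable box filter: a crude low-pass that strips the
    high-frequency detail carrying most of the data volume."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def reduce_second_area(img, first_area, k=9):
    """Keep first-area pixels untouched; replace the second area with a
    low-pass-filtered (blurred) version."""
    return np.where(first_area, img, box_blur(img.astype(float), k))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))      # stand-in for a frame
first_area = np.zeros((64, 64), dtype=bool)
first_area[20:44, 20:44] = True                # hypothetical speaker region
out = reduce_second_area(img, first_area)
```

The first area keeps its original pixel values while the second area loses its high-frequency detail, which is exactly the effect Fig. 3 illustrates.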
As shown in Figure 3, the first image 40 is the image after processing, the important personage that the first spokesman 401 and the second spokesman 402 and their side are arranged in a row has kept former resolution (row in the middle of image), and background 403 is what to process, so decrease resolution is fuzzyyer, given prominence to spokesman and VIP's image information, and unessential background image data is reduced, adapted to the restriction of bandwidth and network speed.Specifically, in video conference, each two field picture all being done to such processing, is also this effect so whole video embodies.
In the prior art, the method that reduces the view data of second area also has a lot, and those skilled in the art know these methods, so just repeat no more at this.
Another embodiment of the present invention also provides an electronic device. Referring to Fig. 4, the electronic device comprises: a sound input means 501 which acquires a first audio signal at a first moment; an image acquisition means 502 which acquires, at the first moment, a first image containing the sound source corresponding to the first audio signal; a first processing chip 503 which determines the area on the first image corresponding to the position of the sound source as a first area, the area outside the first area being a second area; and a second processing chip 504 which reduces the image data of the second area.
Referring again to Fig. 2, the electronic device further comprises a display 10 with four corners: a first corner 101, a second corner 102, a third corner 103 and a fourth corner 104. The sound input means 501 is a microphone array formed by four microphones: the first microphone 201, arranged at the first corner 101, records the first audio signal at a first sub-time point; the second microphone 202, arranged at the second corner 102, records the first audio signal at a second sub-time point; the third microphone 203, arranged at the third corner 103, records the first audio signal at a third sub-time point; and the fourth microphone 204, arranged at the fourth corner 104, records the first audio signal at a fourth sub-time point. The mounting positions of the four microphones form a rectangle, with the four microphones located respectively on its four vertices. The first processing chip 503 then determines the position of the sound source based on the time differences between the first, second, third and fourth sub-time points, and determines the first area based on the position of the sound source.
Of course, in another embodiment, an array formed by just two microphones can already achieve localization, so any number of microphones greater than or equal to two will do. For example, as shown in Fig. 2, only the first microphone 201 and the fourth microphone 204 may be used, or the first microphone 201 and the third microphone 203, or the combination of the third microphone 203 and the fourth microphone 204. In addition, the microphones need not necessarily be arranged at the corners, as long as they are not too close to one another; when they are farther apart the anti-interference capability is stronger, and the more microphones are arranged, the more acoustic information can be obtained and the more accurate the localization can be.
In addition, according to the principle of the microphone array, the first audio signals recorded by the microphones at the same or different time points are processed, for example mixed, to obtain the first audio signal at the first moment.
Further, the first processing chip is specifically configured to determine the area within a preset range around the position of the sound source as the first area; for the implementation, refer to the description in the foregoing method embodiment.
The specific implementation of the first processing chip 503, and the implementation by the second processing chip 504 of reducing the image data of the second area, have been described in detail in the foregoing method embodiment and are not repeated here.
The electronic device further comprises a circuit board; the first processing chip 503 and the second processing chip 504 are separate chips, or the same chip, arranged on this circuit board.
The various variations and specific examples of the image processing method in the foregoing embodiment are equally applicable to the electronic device of this embodiment. Through the foregoing detailed description of the image processing method, those skilled in the art can clearly know the implementation of the electronic device in this embodiment, so for brevity of the specification it is not described in detail here.
The one or more technical schemes that provide in the embodiment of the present invention at least have following technique effect or advantage:
One embodiment of the invention adopt take pictures or video image in, determine the position of the sound source of sending audio signal on image according to audio signal, the zone that the position of this sound source is corresponding is defined as first area, then the view data in the zone outside first area is reduced, for example, when video conference, generally participant each side often only pays close attention to the spokesman, because the mankind to the concern meeting of sound source far above the concern to background, so utilize the scheme in the embodiment of the present application can be by auditory localization, then the background image data beyond the sound source zone is reduced, to adapt to the restriction of bandwidth, thus, do not need too complicated algorithm just can realize image compression, nor affect the main information of image, can assist on the contrary the participant, to the conference speech people, larger concern is arranged, some jamming pattern information have been reduced.
Further, in one embodiment of the invention, a microphone array is used for sound-source localization, which gives high positioning accuracy and good real-time performance. In addition, because a microphone array can form a directional beam that picks up only the speaker's voice while suppressing noise and interference in the environment, the audio signal can be collected more effectively.
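The directional beam described here corresponds to classic delay-and-sum beamforming, which the patent does not spell out; a toy sketch with synthetic one-sample delays (all values illustrative) shows how matching steering delays make the on-axis signal add coherently while an unsteered sum is attenuated:

```python
def delay_and_sum(signals, delays):
    """signals: equal-length per-microphone sample lists.
    delays: per-microphone steering delays in samples. When the delays
    match the true inter-microphone arrival offsets, the source signal
    is time-aligned and averages coherently; otherwise it is attenuated."""
    n = len(signals[0])
    out = []
    for t in range(n):
        acc = 0.0
        for sig, d in zip(signals, delays):
            idx = t - d
            acc += sig[idx] if 0 <= idx < n else 0.0
        out.append(acc / len(signals))
    return out

# A pulse that reaches the second microphone one sample after the first.
mic0 = [0.0, 1.0, 0.0, 0.0]
mic1 = [0.0, 0.0, 1.0, 0.0]
steered = delay_and_sum([mic0, mic1], [1, 0])    # delays match the arrival offsets
unsteered = delay_and_sum([mic0, mic1], [0, 0])  # no steering applied
```

With matching delays the pulse sums to full amplitude; without them the average halves it, which is the attenuation of off-axis sound the paragraph describes.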
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to encompass them as well.

Claims (10)

1. An image processing method applied to an electronic device, characterized in that the method comprises:
obtaining, at a first moment, a first audio signal and a first image that includes the sound source corresponding to the first audio signal;
determining that the area on the first image corresponding to the position of the sound source is a first area, the area outside the first area being a second area; and
reducing the image data of the second area.
2. the method for claim 1, it is characterized in that, described electronic equipment comprises the first microphone and second microphone, is arranged at respectively the first corner and second corner of the display of described electronic equipment, wherein, described the first corner is relative with described the second corner; Describedly determine that on described the first image the zone corresponding to position of described sound source is first area, specifically comprise:
Described the first microphone is in described the first audio signal of the first sub-time point typing;
Described second microphone is in described the first audio signal of the second sub-time point typing;
The time difference based on the described first sub-time point and described the second sub-time point is determined the position of described sound source;
The described first area of location positioning based on described sound source.
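The time-difference step of claim 2 admits a standard far-field reading, though the claim itself fixes no formula: the difference between the two sub time points, multiplied by the speed of sound, gives the path-length difference to the two corner microphones, from which a direction of arrival follows. A sketch under that far-field assumption (the spacing and sound-speed values are illustrative, not from the claim):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature (assumed value)

def doa_angle(t1, t2, mic_spacing):
    """Far-field direction of arrival from the two sub time points.
    t1, t2: arrival times (s) at the first and second microphone;
    mic_spacing: distance (m) between the two corner microphones.
    Returns the angle (radians) measured from the array broadside."""
    path_diff = SPEED_OF_SOUND * (t2 - t1)
    # Clamp for numerical safety before taking the arcsine.
    ratio = max(-1.0, min(1.0, path_diff / mic_spacing))
    return math.asin(ratio)

broadside = doa_angle(0.0, 0.0, 0.5)            # simultaneous arrival
endfire = doa_angle(0.0, 0.5 / 343.0, 0.5)      # full-spacing path difference
```

Equal sub time points place the source on the perpendicular bisector of the microphone pair (angle 0); a path difference equal to the spacing places it along the microphone axis (angle pi/2).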
3. The method of claim 2, characterized in that obtaining the first audio signal at the first moment is specifically: mixing, at the first moment, the first audio signal recorded at the first sub time point with the first audio signal recorded at the second sub time point.
4. the method for claim 1, it is characterized in that, described electronic equipment comprises the first microphone, second microphone, the 3rd microphone and the 4th microphone, be arranged at respectively the first corner, the second corner, the 3rd corner and the 4th corner of the display of described electronic equipment, described the first microphone, described second microphone, described the 3rd microphone and described the 4th microphone lay respectively on four summits of a rectangle; Describedly determine that on described the first image the zone corresponding to position of described sound source is first area, specifically comprise:
Described the first microphone is in described the first audio signal of the first sub-time point typing;
Described second microphone is in described the first audio signal of the second sub-time point typing;
Described the 3rd microphone is in described the first audio signal of the 3rd sub-time point typing;
Described the 4th microphone is in described the first audio signal of the 4th sub-time point typing;
Time difference based between the described first sub-time point, the described second sub-time point, described the 3rd sub-time point and described the 4th sub-time point, determine the position of described sound source;
The described first area of location positioning based on described sound source.
5. The method of claim 4, characterized in that determining the first area based on the position of the sound source is specifically:
determining the area within a preset range around the position of the sound source as the first area.
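The "preset range" of claim 5 can be read, purely for illustration, as a pixel half-width around the sound source's position on the image; the rectangular parameterization below is an assumption, not something the claim fixes:

```python
def first_area(cx, cy, preset_range, width, height):
    """Sketch of claim 5: the first area is the region within a preset
    range around the sound source's image position (cx, cy), clamped to
    the image bounds. `preset_range` is a half-width in pixels.
    Returns (top, left, bottom, right) with bottom/right exclusive."""
    left = max(0, cx - preset_range)
    top = max(0, cy - preset_range)
    right = min(width, cx + preset_range + 1)
    bottom = min(height, cy + preset_range + 1)
    return (top, left, bottom, right)

area = first_area(10, 10, 3, 100, 100)  # interior source
edge = first_area(1, 1, 3, 100, 100)    # source near the image corner
```

Everything outside the returned rectangle would then be the second area whose image data is reduced.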
6. An electronic device, characterized by comprising:
a sound input device, which obtains a first audio signal at a first moment;
an image acquisition device, which obtains, at the first moment, a first image that includes the sound source corresponding to the first audio signal;
a first processing chip, which determines that the area on the first image corresponding to the position of the sound source is a first area, the area outside the first area being a second area; and
a second processing chip, which reduces the image data of the second area.
7. The electronic device of claim 6, characterized in that the electronic device further comprises a display on which the sound input device is located, the sound input device comprising:
a first microphone, arranged at a first corner of the display, which records the first audio signal at a first sub time point; and
a second microphone, arranged at a second corner of the display, which records the first audio signal at a second sub time point, wherein the first corner is opposite the second corner; and wherein the first processing chip specifically determines the position of the sound source based on the time difference between the first sub time point and the second sub time point, and determines the first area based on the position of the sound source.
8. The electronic device of claim 7, characterized in that the sound input device further comprises a third processing chip for mixing, at the first moment, the first audio signal recorded at the first sub time point with the first audio signal recorded at the second sub time point.
9. The electronic device of claim 7, characterized in that the electronic device further comprises a display on which the sound input device is located, the sound input device comprising:
a first microphone, arranged at a first corner of the display, which records the first audio signal at a first sub time point;
a second microphone, arranged at a second corner of the display, which records the first audio signal at a second sub time point;
a third microphone, arranged at a third corner of the display, which records the first audio signal at a third sub time point; and
a fourth microphone, arranged at a fourth corner of the display, which records the first audio signal at a fourth sub time point, wherein the first microphone, the second microphone, the third microphone and the fourth microphone lie respectively on the four vertices of a rectangle; and wherein the first processing chip specifically determines the position of the sound source based on the time differences among the first sub time point, the second sub time point, the third sub time point and the fourth sub time point, and determines the first area based on the position of the sound source.
10. The electronic device of claim 9, characterized in that the first processing chip is specifically configured to determine the area within a preset range around the position of the sound source as the first area.
CN2012101812835A 2012-06-04 2012-06-04 Image processing method and electronic equipment Pending CN103458220A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012101812835A CN103458220A (en) 2012-06-04 2012-06-04 Image processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2012101812835A CN103458220A (en) 2012-06-04 2012-06-04 Image processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN103458220A true CN103458220A (en) 2013-12-18

Family

ID=49740124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012101812835A Pending CN103458220A (en) 2012-06-04 2012-06-04 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN103458220A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105163024A (en) * 2015-08-27 2015-12-16 华为技术有限公司 Method for obtaining target image and target tracking device
CN106205573A (en) * 2016-06-28 2016-12-07 青岛海信移动通信技术股份有限公司 A kind of audio data processing method and device
CN107612881A (en) * 2017-08-01 2018-01-19 广州视源电子科技股份有限公司 Method, apparatus, terminal and the storage medium of picture are transmitted when transmitting file
CN108366245A (en) * 2018-03-16 2018-08-03 北京虚拟映画科技有限公司 Imaged image transmission method and device
CN108366244A (en) * 2018-03-16 2018-08-03 北京虚拟映画科技有限公司 video image transmission method and device
CN108419065A (en) * 2018-03-16 2018-08-17 中影数字巨幕(北京)有限公司 Image processing method and device
CN108419064A (en) * 2018-03-16 2018-08-17 中影数字巨幕(北京)有限公司 Image processing method and device
CN108682032A (en) * 2018-04-02 2018-10-19 广州视源电子科技股份有限公司 Control method, apparatus, readable storage medium storing program for executing and the terminal of video image output

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030081504A1 (en) * 2001-10-25 2003-05-01 Mccaskill John Automatic camera tracking using beamforming
CN1423487A (en) * 2001-12-03 2003-06-11 微软公司 Automatic detection and tracing for mutiple people using various clues
US20040032487A1 (en) * 2002-04-15 2004-02-19 Polycom, Inc. Videoconferencing system with horizontal and vertical microphone arrays
CN1784900A (en) * 2003-05-08 2006-06-07 坦德伯格电信公司 Arrangement and method for audio source tracking
CN101350906A (en) * 2008-09-04 2009-01-21 北京中星微电子有限公司 Method and apparatus for correcting image
US20090323981A1 (en) * 2008-06-27 2009-12-31 Microsoft Corporation Satellite Microphone Array For Video Conferencing
CN102324025A (en) * 2011-09-06 2012-01-18 北京航空航天大学 Human face detection and tracking method based on Gaussian skin color model and feature analysis
CN102348116A (en) * 2010-08-03 2012-02-08 株式会社理光 Video processing method, video processing device and video processing system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030081504A1 (en) * 2001-10-25 2003-05-01 Mccaskill John Automatic camera tracking using beamforming
CN1423487A (en) * 2001-12-03 2003-06-11 微软公司 Automatic detection and tracing for mutiple people using various clues
US20040032487A1 (en) * 2002-04-15 2004-02-19 Polycom, Inc. Videoconferencing system with horizontal and vertical microphone arrays
CN1784900A (en) * 2003-05-08 2006-06-07 坦德伯格电信公司 Arrangement and method for audio source tracking
US20090323981A1 (en) * 2008-06-27 2009-12-31 Microsoft Corporation Satellite Microphone Array For Video Conferencing
CN101350906A (en) * 2008-09-04 2009-01-21 北京中星微电子有限公司 Method and apparatus for correcting image
CN102348116A (en) * 2010-08-03 2012-02-08 株式会社理光 Video processing method, video processing device and video processing system
CN102324025A (en) * 2011-09-06 2012-01-18 北京航空航天大学 Human face detection and tracking method based on Gaussian skin color model and feature analysis

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105163024A (en) * 2015-08-27 2015-12-16 华为技术有限公司 Method for obtaining target image and target tracking device
CN106205573A (en) * 2016-06-28 2016-12-07 青岛海信移动通信技术股份有限公司 A kind of audio data processing method and device
CN107612881A (en) * 2017-08-01 2018-01-19 广州视源电子科技股份有限公司 Method, apparatus, terminal and the storage medium of picture are transmitted when transmitting file
CN107612881B (en) * 2017-08-01 2020-07-28 广州视源电子科技股份有限公司 Method, device, terminal and storage medium for transmitting picture during file transmission
CN108366245A (en) * 2018-03-16 2018-08-03 北京虚拟映画科技有限公司 Imaged image transmission method and device
CN108366244A (en) * 2018-03-16 2018-08-03 北京虚拟映画科技有限公司 video image transmission method and device
CN108419065A (en) * 2018-03-16 2018-08-17 中影数字巨幕(北京)有限公司 Image processing method and device
CN108419064A (en) * 2018-03-16 2018-08-17 中影数字巨幕(北京)有限公司 Image processing method and device
CN108682032A (en) * 2018-04-02 2018-10-19 广州视源电子科技股份有限公司 Control method, apparatus, readable storage medium storing program for executing and the terminal of video image output
CN108682032B (en) * 2018-04-02 2021-06-08 广州视源电子科技股份有限公司 Method and device for controlling video image output, readable storage medium and terminal

Similar Documents

Publication Publication Date Title
CN103458220A (en) Image processing method and electronic equipment
CN105340299B (en) Method and its device for generating surround sound sound field
US8605890B2 (en) Multichannel acoustic echo cancellation
WO2015184893A1 (en) Mobile terminal call voice noise reduction method and device
US8693713B2 (en) Virtual audio environment for multidimensional conferencing
CN104429100A (en) Systems and methods for surround sound echo reduction
CN108877827A (en) Voice-enhanced interaction method and system, storage medium and electronic equipment
USRE44737E1 (en) Three-dimensional position calibration of audio sensors and actuators on a distributed computing platform
CN112188368A (en) Method and system for directionally enhancing sound
WO2017152601A1 (en) Microphone determination method and terminal
US11580213B2 (en) Password-based authorization for audio rendering
US10999691B2 (en) Method for acquiring spatial division information, apparatus for acquiring spatial division information, and storage medium
US11575988B2 (en) Apparatus, method and computer program for obtaining audio signals
EP2519831A1 (en) Method and system for determining the direction between a detection point and an acoustic source
US20170188140A1 (en) Controlling audio beam forming with video stream data
US20210006976A1 (en) Privacy restrictions for audio rendering
CN106093866A (en) A kind of sound localization method being applicable to hollow ball array
CN103268766A (en) Method and device for speech enhancement with double microphones
US10991392B2 (en) Apparatus, electronic device, system, method and computer program for capturing audio signals
US10957334B2 (en) Acoustic path modeling for signal enhancement
CN105407443A (en) Sound recording method and device
CN110364159A (en) A kind of the execution method, apparatus and electronic equipment of phonetic order
CN111246345B (en) Method and device for real-time virtual reproduction of remote sound field
CN116569255A (en) Vector field interpolation of multiple distributed streams for six degree of freedom applications
WO2022262316A1 (en) Sound signal processing method and apparatus, and computer-readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20131218