CN108594988A - Wearable electronic device and its operating method for audio imaging - Google Patents

Wearable electronic device and its operating method for audio imaging

Info

Publication number
CN108594988A
CN108594988A CN201810241438.7A CN201810241438A CN108594988A CN 108594988 A CN108594988 A CN 201810241438 A CN201810241438 A
Authority
CN
China
Prior art keywords
sound
visual field
eyeglass
vision content
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810241438.7A
Other languages
Chinese (zh)
Inventor
王小迪
许哲维
谢易霖
李志弘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Merry Electronics Shenzhen Co Ltd
Original Assignee
Merry Electronics Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Merry Electronics Shenzhen Co Ltd filed Critical Merry Electronics Shenzhen Co Ltd
Priority to CN201810241438.7A priority Critical patent/CN108594988A/en
Publication of CN108594988A publication Critical patent/CN108594988A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides a wearable electronic device, a computer-implemented operating method, and an electronic system. The method includes: receiving a sound and correspondingly generating a plurality of audio data according to the received sound; determining the relative direction and relative position of the sound source with respect to the wearable electronic device, and the intensity level of the sound; generating, according to the relative direction, the intensity level of the sound, and the relative position, computer-generated visual content corresponding to the sound; and displaying the computer-generated visual content corresponding to the sound on a transparent lens of the display device of the wearable electronic device.

Description

Wearable electronic device and its operating method for audio imaging
Technical field
The invention relates to a wearable electronic device and an operating method thereof, and in particular to a wearable electronic device for audio imaging and an operating method thereof, the wearable electronic device including a head-mounted display and a plurality of sound receivers.
Background technology
Augmented reality (AR) combines a real environment with virtual information generated by a computer. Augmented reality glasses incorporate a transparent display (for example, transparent lenses capable of showing computer-generated images/video) so as to superimpose visual information onto the wearer's field of view.
Invention content
Accordingly, the present invention provides a wearable electronic device, a computer-implemented operating method, and an electronic system that can receive a sound produced by a nearby sound source, analyze the audio data corresponding to the received sound, and generate and display an audio image corresponding to the received sound, so that the displayed audio image conveys the intensity of the sound and the relative direction of the sound source.
The wearable electronic device of the invention includes a front frame, side frames, a display device having transparent lenses, a sound receiver array of multiple sound receivers arranged on the front frame and the side frames, and a controller. The display device is used to show visual content on the transparent lenses, through which the wearer can see the scene of the real world. The sound receiver array is used to receive a sound produced by a sound source around the wearable electronic device and to correspondingly generate a plurality of audio data according to the received sound. The controller is coupled to the display device and the sound receiver array; it analyzes the audio data to determine the relative direction and relative position of the sound source with respect to the wearable electronic device, and determines the sound intensity level. In addition, the controller generates computer-generated visual content corresponding to the sound (for example, an acoustic image) according to the relative direction, the sound intensity, and the relative position. The display device shows the computer-generated visual content corresponding to the sound on the lens so that it is captured by the wearer's vision within the view-via-lens, where the view-via-lens includes both the computer-generated visual content shown on the lens and the real-world scene the wearer sees through the lens.
The computer-implemented operating method of the invention includes: receiving, by the sound receiver array of an electronic device, a sound emitted by a sound source around the electronic device, and correspondingly generating, by the sound receiver array, a plurality of audio data according to the received sound; analyzing the audio data by the controller of the wearable electronic device to determine the relative direction and relative position of the sound source with respect to the wearable electronic device, and determining the sound intensity level; generating, by the controller, computer-generated visual content corresponding to the sound according to the relative direction, the sound intensity level, and the relative position; and showing, by the display device of the wearable electronic device, the computer-generated visual content corresponding to the sound on the transparent lens, through which the wearer can see the scene of the real world, so that the view obtained via the lens is captured by the wearer's vision, where the view via the lens includes the computer-generated visual content shown on the lens and the real-world scene seen by the wearer through the lens.
Based on the above embodiments, the provided wearable electronic device, computer-implemented operating method, and electronic system can receive a sound produced by a nearby sound source, analyze the audio data corresponding to the received sound, and correspondingly generate and display an audio image corresponding to the received sound, so that from the audio image (computer-generated visual content) shown in the view-via-lens or in a captured image, the user obtains information about the intensity of the sound and the relative direction/position of the sound source. To make the above features and advantages of the invention clearer and easier to understand, embodiments are given below and described in detail with reference to the accompanying drawings.
Description of the drawings
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are only some embodiments of the application; for those of ordinary skill in the art, drawings of other embodiments can be obtained from these drawings without creative effort.
Figure 1A is a schematic diagram of a wearable electronic device according to an embodiment of the invention.
Figure 1B is a structural block diagram of a wearable electronic device according to an embodiment of the invention.
Fig. 2 is a flowchart of a computer-implemented operating method according to an embodiment of the invention.
Fig. 3A is a top view of a sound source outside the view-via-lens according to an embodiment of the invention.
Fig. 3B is a schematic diagram of first computer-generated visual content produced for the sound of the source in Fig. 3A according to an embodiment of the invention.
Fig. 3C is a schematic diagram of the view-via-lens with the first computer-generated visual content according to an embodiment of the invention.
Fig. 3D is a schematic diagram of another first computer-generated visual content produced for the sound of the source in Fig. 3A according to an embodiment of the invention.
Fig. 3E is a schematic diagram of the view-via-lens with the first computer-generated visual content according to an embodiment of the invention.
Fig. 4A is a top view of a sound source within the view-via-lens according to an embodiment of the invention.
Fig. 4B is a schematic diagram of the view-via-lens with second computer-generated visual content according to an embodiment of the invention.
Fig. 4C is a schematic diagram of the view-via-lens with second computer-generated visual content according to an embodiment of the invention.
Fig. 5A is a schematic diagram of a real-world scene according to an embodiment of the invention.
Fig. 5B is a schematic diagram of the view-via-lens having first computer-generated visual content and second computer-generated visual content corresponding respectively to different sounds according to an embodiment of the invention.
Fig. 5C is a schematic diagram of the view-via-lens having second computer-generated visual contents corresponding respectively to different sounds according to an embodiment of the invention.
Reference signs:
10: Wearable electronic device
101: Front frame
102, 103: Side frames
110: Sound receiver array
111(1)-111(5), 112(1)-112(5): Sound receivers
120: Display device
121, 122: Lenses
130: Controller
131: Processor
132: Audio image calculator
141: Reference point of lens 121
142: Reference point of lens 122
S21-S29: Steps
300, 601, 602: Desks
301, 502: Loudspeakers
302, 501: Phones
A1: Central angle
AD1: Azimuth between the sound source and the wearable electronic device
AP1-AP4: Arrow patterns indicating sound sources outside the view-via-lens
B1-B4: Boundaries of the view-via-lens
BP1-BP4: Bar patterns
CM1: Circle
CP1: Pie pattern
CP2: Sound source pattern
CP2(1)-CP2(3): Pie patterns within sound source pattern CP2
CP3-CP5: Circular areas of different colors and sizes
CP6, CP7: Second computer-generated visual contents
D1: Direction
D11: Relative direction of P1 with respect to P2
D21: Relative direction of P3 with respect to 141
D22: Relative direction of P3 with respect to 142
D31: Relative direction of P3 with respect to P2
H1: Height of the loudspeaker
H2: Height of the reference point of the wearable electronic device
P1: Position of the loudspeaker
P2: Reference point of the wearable electronic device
P3: Position of the phone
P4: Relative position of the phone as determined by the controller from its sound
P5: Relative position of the loudspeaker as determined by the controller from its sound
PR1, PR2: Specific regions of the pie pattern
PS1-PS5: Sound marks
PT1-PT3: Points at the tips of the arrow patterns
S1, S2: Sounds
VL1-VL6: Views-via-lens
VP1: Virtual point on lens 121 along the line of sight from reference point 141
VP2: Virtual point on lens 122 along the line of sight from reference point 142
X: X coordinate axis (also called X direction)
Y: Y coordinate axis (also called Y direction)
Z: Z coordinate axis (also called Z direction)
Detailed description of embodiments
To facilitate understanding of the present invention, the invention is described more fully below with reference to the relevant drawings, in which preferred embodiments of the invention are shown. The invention may, however, be embodied in many different forms and is not limited to the embodiments described herein. Rather, these embodiments are provided so that the disclosure will be thorough and complete.
It should be noted that when an element is referred to as being "connected to" another element, it can be directly connected to the other element, or intervening elements may also be present.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terms used in the description of the invention are for the purpose of describing specific embodiments only and are not intended to limit the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The application provides a wearable electronic device with a plurality of sound receivers (such as microphones) and a transparent display. The wearable electronic device can be worn like glasses, a helmet, VR glasses, AR glasses, or the like. By simultaneously obtaining audio data (signals) from the microphones and applying a beamforming technique, information about the sound source (computer-generated visual content, or an audio image) can be shown on the transparent display.
As shown in Figure 1A, in this example it is assumed that a user wears the wearable electronic device 10: the back-to-front direction of the wearer/wearable electronic device corresponds to the Y coordinate axis (also called the Y direction) of an orthogonal coordinate system, the left-to-right direction of the wearer/wearable electronic device corresponds to the X coordinate axis (also called the X direction), and the bottom-to-top direction of the wearer/wearable electronic device corresponds to the Z coordinate axis (also called the Z direction) of the orthogonal coordinate system.
The wearable electronic device 10 includes a front frame 101 and side frames 102 and 103, where the first ends of the side frames 102 and 103 are connected to the two sides of the front frame 101, as shown in Figure 1A. In another embodiment, the second end of the side frame 102 may be directly connected to the second end of the side frame 103.
The wearable electronic device 10 further includes a sound receiver array 110, which includes a plurality of sound receivers 111(1) to 111(N) and 112(1) to 112(N) arranged on the front frame 101 and the side frames 102 and 103, where N may be a positive integer equal to or greater than 2. For example, sound receiver 111(1) is disposed at the second end of the side frame 102 (the left side frame); sound receivers 111(2)-111(5) are disposed at equal intervals on one part (the left half) of the front frame 101 and at its corner; sound receiver 112(1) is disposed at the second end of the side frame 103 (the right side frame); and sound receivers 112(2)-112(5) are disposed at equal intervals on the other part (the right half) of the front frame 101 and at its corner. It should be noted that this arrangement is only for labeling in the drawings; the number of sound receivers is not limited in the embodiments. In some embodiments, according to actual needs, one or more additional sound receivers may be added between every two sound receivers shown in Figure 1A. In this embodiment, each sound receiver may be, for example, a microphone or another type of transducer capable of receiving (detecting) sounds produced around the wearable electronic device 10. Each sound receiver of the sound receiver array can generate audio data according to the received sound.
The wearable electronic device 10 further includes a controller 130 and a display device 120, where the controller 130 includes a processor 131 and an audio image calculator 132, and the controller 130 is coupled to the display device 120 and the sound receiver array 110.
In this embodiment, the processor 131 may include the central processing unit (CPU) of the wearable electronic device 10, which controls the overall operation of the wearable electronic device 10. In some embodiments, the processor 131 does so by loading software or firmware stored in a non-volatile computer-readable storage medium (or other storage device/circuit) and executing that software or firmware (that is, the processor 131 is programmed) to carry out the operating method provided in this embodiment. The processor 131 may be or may include one or more general-purpose or special-purpose programmable microprocessors, digital signal processors (Digital Signal Processor, DSP), programmable controllers, application-specific integrated circuits (Application Specific Integrated Circuits, ASIC), programmable logic devices (Programmable Logic Devices, PLD), other similar devices, or a combination thereof.
In this embodiment, the audio image calculator 132 is a circuit or chip programmed with one or more algorithms/methods (for example, beamforming, angle of arrival (AoA), time difference of arrival (TDOA), frequency difference of arrival (FDOA), or other related techniques and algorithms/methods) that calculates, from input audio data (or sound signals) generated according to a sound, the corresponding direction of arrival (DOA) of the sound produced by the sound source. For example, sound receiver 111(1) receives a sound and correspondingly generates audio data for the received sound. Next, sound receiver 111(1) sends the audio data to the audio image calculator 132 of the controller 130. The audio data is input into the audio image calculator 132, which through its computation outputs the direction of arrival corresponding to the sound received by sound receiver 111(1). The audio image calculator 132 may send the direction of arrival corresponding to the sound received by sound receiver 111(1) to the processor 131. The DOA corresponding to the sound received by sound receiver 111(1) provides spatial information about the direction the sound comes from (relative to the sound receiver that received it). The spatial information provided by the DOA may be a direction vector (a 3D direction vector) in the orthogonal coordinate system.
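The embodiment does not commit to one particular algorithm for the audio image calculator 132. As a minimal illustration of the TDOA family it names, the following Python sketch estimates the delay between two microphone signals with GCC-PHAT and converts it into a far-field arrival angle; the function names, the two-microphone far-field model, and the sampling setup are assumptions for illustration only, not taken from the patent:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, air at roughly 20 degrees C

def gcc_phat_tdoa(sig, ref, fs):
    """Estimate the time difference of arrival (seconds) between two
    microphone signals via the GCC-PHAT cross-correlation."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12            # PHAT weighting
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift  # samples of delay
    return shift / fs

def doa_from_tdoa(tdoa, mic_distance):
    """Far-field DOA: angle (radians) between the microphone-pair axis
    and the incoming wavefront, from a single TDOA measurement."""
    cos_theta = np.clip(tdoa * SPEED_OF_SOUND / mic_distance, -1.0, 1.0)
    return np.arccos(cos_theta)
```

With more than two receivers, pairwise angles of this kind can be fused into the 3D direction vectors ("DOA data") that the embodiment passes on to the processor 131.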
The display device 120 includes one or more transparent lenses (such as lens 121 and lens 122). The lenses may be disposed in the hollow region of the front frame 101. In this embodiment, the wearer's line of sight passes through the transparent lenses to see the real-world scene (that is, the view-via-lens); in other words, the wearer's vision captures the real-world scene through the lenses. The display device 120 is configured to send the image signal of the visual content directly to the lenses and render the visual content on the display surface of the transparent lenses, so that the lenses themselves show the visual content according to the image signal. For example, a lens may be a transparent organic light-emitting diode (OLED) display panel, an active-matrix organic light-emitting diode (AMOLED) display panel, a field-sequential liquid crystal display (FS-LCD) panel, or a thin-film-transistor liquid crystal display (TFT-LCD) panel. The view-via-lens denotes everything the wearer sees through the lens (including the real-world scene and the visual content rendered on the display surface).
In another embodiment, the display device 120 may render the visual content on the lens by projecting the image of the visual content onto a lens display surface that includes a transparent reflective layer capable of reflecting the projected visual-content image.
In one embodiment, a lens may include, for example, a resistive touch sensor, a capacitive touch sensor, and/or another suitable type of touch sensor for detecting touch operations (actions) performed on the lens.
Referring to Fig. 2, in step S21 the sound receiver array 110 of the electronic device 10 receives a sound emitted by a sound source around the electronic device 10. For example, in Fig. 3A, a loudspeaker (located around the wearable electronic device 10) emits a sound S1 (outputs sound S1), and the sound receiver array (each sound receiver) detects (receives) the sound. Next, in step S23, the sound receiver array generates a plurality of audio data according to the received sound. Specifically, each sound receiver receives the sound S1 and generates audio data (also called an audio signal or sound signal) according to the sound pressure it receives or other audio parameters, which may differ from receiver to receiver. The sound receiver array 110 then sends the plurality of audio data generated by the different sound receivers to the controller 130.
In step S25, the controller 130 analyzes the plurality of audio data to determine the relative direction of the sound source with respect to the electronic device, to determine the sound intensity level, and to determine, according to the relative direction, the relative position of the sound source with respect to the electronic device.
Specifically, in this embodiment, the audio image calculator 132 receives the plurality of audio data generated respectively by the sound receivers 111(1) to 111(5) and 112(1) to 112(5) for the received sound, and can perform a DOA calculation on each piece of audio data to obtain the DOA data corresponding to each of them (DOA data are obtained by computing the DOA from the audio data). The DOA data include spatial information whose direction vector can represent the 3D direction from the position of the sound source to the position of the sound receiver that provided the audio data corresponding to the DOA data.
For example, the audio image calculator 132 performs a DOA calculation on the audio data (also called audio data S1_111(1)) generated for the sound S1 received by microphone 111(1) (also called sound S1_111(1)), and then outputs the resulting DOA data (also called DOA data S1_111(1) or DOA data 111(1)_S1). DOA data S1_111(1) may be the relative 3D direction vector from the position of the sound source 301 of sound S1 to the position of sound receiver 111(1); DOA data 111(1)_S1 may be the relative 3D direction vector from the position of sound receiver 111(1) to the position of the sound source 301 of sound S1. The audio image calculator 132 may send the calculated plurality of DOA data to the processor for further analysis.
The processor 131 can analyze the plurality of DOA data corresponding to the sound to determine the relative direction (an overall relative direction vector) of the sound source with respect to the wearable electronic device. In more detail, the processor 131 can calculate, from the plurality of DOA data corresponding to the sound, the relative direction between the position of the sound source and the position of the reference point of the wearable electronic device.
In addition, the processor 131 can analyze the plurality of DOA data corresponding to the sound to determine the relative position (overall relative coordinates) of the sound source with respect to the wearable electronic device. In more detail, the processor 131 can calculate, from the plurality of DOA data corresponding to the sound, the relative position between the position (coordinates) of the sound source and the position (coordinates) of the reference point of the wearable electronic device.
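One hedged way such an overall relative position could be computed is a least-squares intersection of the per-receiver DOA rays; the sketch below assumes known receiver coordinates in the device frame and rays that are not all parallel (the function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def triangulate_source(mic_positions, doa_unit_vectors):
    """Least-squares point closest to the set of rays that start at each
    sound receiver and point along its estimated DOA vector.
    mic_positions: (N, 3) receiver coordinates in the device frame.
    doa_unit_vectors: (N, 3) unit vectors from each receiver toward the
    source (the per-receiver 'DOA data' of the embodiment)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(mic_positions, doa_unit_vectors):
        d = d / np.linalg.norm(d)
        proj = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += proj
        b += proj @ p
    # Singular A means all rays are parallel; a real implementation
    # would fall back to reporting direction only.
    return np.linalg.solve(A, b)           # estimated source (x, y, z)
```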
In this embodiment, the processor 131 can analyze the plurality of audio data to determine the sound intensity level. Furthermore, only when the determined sound intensity exceeds an intensity threshold does the processor 131 proceed to step S27, which prevents the sound receiver array 110 from generating visual content in response to faint background noise it picks up. Moreover, in one embodiment, the controller 130 further includes an audio filter that filters the audio data to prevent the generated audio data from being affected by the wearer's own voice and by spatial aliasing. In other words, through the audio filter, the sound receiver array can eliminate the wearer's voice (that is, not pick up the wearer's own sound).
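A minimal sketch of the threshold gate described above, assuming frame-based RMS levels expressed in dB; neither the reference level nor the example threshold value comes from the patent:

```python
import numpy as np

def intensity_level_db(frame, ref=1.0):
    """Sound intensity level of one audio frame, in dB relative to `ref`."""
    rms = np.sqrt(np.mean(np.square(frame)))
    return 20.0 * np.log10(max(rms, 1e-12) / ref)

def should_render(frame, threshold_db=-40.0):
    """Gate step S27: only trigger visual-content generation when the
    level exceeds the intensity threshold, so faint background noise
    does not produce spurious imagery."""
    return intensity_level_db(frame) > threshold_db
```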
In one embodiment, the audio filter is used to focus the sound within a desired frequency band. For example, the audio filter may focus on sound in the 1 kHz to 2 kHz band.
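A possible realization of such a band-focusing audio filter, sketched with a SciPy Butterworth band-pass; the filter order and the zero-phase filtering are illustrative choices, not specified by the patent:

```python
from scipy.signal import butter, sosfiltfilt

def focus_band(audio, fs, low_hz=1000.0, high_hz=2000.0, order=4):
    """Band-pass the audio data so later DOA and intensity analysis
    focuses on the 1 kHz - 2 kHz band mentioned in the embodiment."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass",
                 fs=fs, output="sos")
    return sosfiltfilt(sos, audio)  # zero-phase, preserves timing
```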
In step S27, according to the relative direction, the sound intensity level, and the relative position, the controller 130 generates the computer-generated visual content corresponding to the sound. Specifically, the processor 131 determines (estimates), according to the relative direction and the relative position, whether the image of the sound source can be seen by the wearer (that is, whether the image of the sound source lies within the wearer's field of view through the lens), so as to generate different types of computer-generated visual content (for example, first computer-generated visual content and second computer-generated visual content). The size or color of the computer-generated visual content can be adjusted according to different sound intensity levels. The unit of measurement of the sound intensity may be, for example, "Pa" or "dB". Figs. 3A to 3E and 4A to 4C describe the different types of computer-generated visual content in more detail.
Next, in step S29, the display device renders the computer-generated visual content corresponding to the sound on the transparent lenses of the display device, where the wearer of the wearable electronic device sees the view-via-lens through the lens from the lens side.
Specifically, after the processor 131 generates the computer-generated visual content (image/picture) corresponding to the sound, it sends the content to the display device 120, and the display device 120 renders the computer-generated visual content representing the sound onto the lens, so that the view-via-lens is captured by the wearer's vision, where the view-via-lens includes the computer-generated visual content shown on the lens and the real-world scene the wearer sees through the lens (that is, on the lens the wearer sees the shown computer-generated visual content together with the real-world scene in front of the lens).
Referring to Fig. 3A, for example, assume the wearer wears the wearable electronic device 10: a phone 302 is located on a desk 300 in front of the wearable electronic device, and the wearer sees the phone 302 (through lens 121 or lens 122, the phone 302 is within the visible range); a loudspeaker 301 emitting a sound S1 is located to the left of the wearable electronic device 10, and the wearer cannot see the loudspeaker 301 (the loudspeaker 301 is beyond the visible range of lens 121 or lens 122).
Sound receivers 111(1) to 111(5) and 112(1) to 112(5) receive the sound S1 and generate a plurality of audio data for the controller 130. From its analysis of the audio data, the controller 130 determines the relative direction D11 and the relative position between the position P1 of the loudspeaker 301 and the reference point P2 of the wearable electronic device 10. For example, if the coordinates of the reference point P2 of the wearable electronic device are (0, 0, 0), then the relative position of the loudspeaker with respect to the wearable electronic device is simply the coordinates of position P1.
In this embodiment, the controller 130 determines, according to the relative direction D11 and the relative position, whether the loudspeaker 301 is within the view-via-lens. Specifically, as shown in Fig. 3C, the processor 131, for example through a calibration procedure with the wearer's eyes, can determine a first range of relative directions (on the X-Y plane) corresponding to the boundaries B1 and B2 of the view-via-lens VL1, and a second range of relative directions (on the Y-Z plane) corresponding to the boundaries B3 and B4 of the view-via-lens VL1. The processor 131 can then determine whether the image of the loudspeaker 301 is within the view-via-lens VL1 by checking whether the relative direction D11 on the X-Y plane falls within the first range. In the case of Fig. 3A, the relative direction D11 is judged not to be within the first range, and the processor 131 determines that the image of the loudspeaker 301 is not within the view-via-lens VL1 (the loudspeaker 301 is judged not to be within the view-via-lens VL1). That is, the processor 131 determines that the wearer cannot see the loudspeaker 301 through the lens.
In addition, the controller 130 also determines, according to the relative direction, the corresponding azimuth AD1 between the sound source and the wearable electronic device. The azimuth corresponding to the direction D1 is zero degrees.
For example, according to the relative direction D11, the controller 130 can determine the corresponding azimuth AD1 between the loudspeaker 301 and the wearable electronic device 10 by identifying the intersection point PS1 of the relative direction D11 with a circle CM1 centered on the reference point P2 of the wearable electronic device 10, and determine the azimuth AD1 on the X-Y plane (a 2D plane) according to the point PS1. The azimuth AD1 indicates the direction the sound S1 comes from on the X-Y plane.
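The azimuth/elevation bookkeeping and the in-view test of the preceding paragraphs could look like the following sketch, assuming the calibrated first and second ranges reduce to symmetric field-of-view half-angles; the angle values are placeholders, not calibration data from the patent:

```python
import numpy as np

def azimuth_deg(direction):
    """Azimuth on the X-Y plane, with the +Y axis (direction D1,
    straight ahead of the wearer) defined as zero degrees."""
    x, y, _ = direction
    return np.degrees(np.arctan2(x, y))

def elevation_deg(direction):
    """Elevation relative to the X-Y plane (positive = above the device)."""
    x, y, z = direction
    return np.degrees(np.arctan2(z, np.hypot(x, y)))

def in_lens_view(direction, h_fov_deg=40.0, v_fov_deg=25.0):
    """Is the source inside the view-via-lens? The half-angles stand in
    for the calibrated first range (B1/B2) and second range (B3/B4)."""
    return (abs(azimuth_deg(direction)) <= h_fov_deg / 2.0 and
            abs(elevation_deg(direction)) <= v_fov_deg / 2.0)
```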
In this embodiment, if the sound source is determined, according to the relative direction and the relative position, not to be within the view-via-lens, the controller generates first computer-generated visual content corresponding to the sound, indicating the sound intensity level and the relative direction, and the display device shows this first computer-generated visual content on the lens.
As shown in Fig. 3B, continuing the example of Fig. 3A, the loudspeaker 301 is judged not to be within the view-via-lens VL1, and the processor 131 can generate first computer-generated visual content corresponding to the sound S1, where the first computer-generated visual content may be a pie pattern CP1 with multiple segments (corresponding respectively to different angles of the pie pattern), and the color of each segment is determined according to the sound intensity level of the sound coming from the direction corresponding to that segment. In the following description, the segments may also be referred to as regions. For example, the center of the region PR1 corresponding to the loudspeaker 301 (the sound source) is set at the cross-point of the relative direction D11 with the circle CM1 centered on the reference point position P2 of the wearable electronic device 10, and the color of the region PR1 is determined according to the sound intensity level of the sound S1 from the loudspeaker 301. Because the sound intensity levels of the segments corresponding to PR2 and PR1 differ, the color (a second color) of region PR2 (segments PR2) of the pie pattern CP1 differs from the color (a first color) of region PR1 (segments PR1). In other words, the processor 131 positions the region of the pie pattern corresponding to the sound source (or sound) by setting the central angle A1 of that region (or segment) equal to the corresponding azimuth AD1. Therefore, a segment of the pie pattern having the first color (corresponding to a first sound intensity level) and a first angle represents a sound of the first sound intensity level coming from the azimuth direction of the sound source equal to that first angle of the pie pattern. In one embodiment, the processor 131 can add a sound mark PS1 corresponding to the sound on the pie pattern CP1, and the position, color, and/or size of the sound mark PS1 can be determined according to the characteristics of the sound (for example, sound intensity level and relative direction). If multiple sound sources are detected, multiple sound marks (for example, PS1) are correspondingly added at the specific regions (for example, PR1, PR2) of the pie pattern CP1.
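As a hedged illustration of how a pie segment and its color might be selected, assuming equal angular segments and a simple intensity-indexed palette (the segment count and the palette indexing are assumptions, not taken from the patent):

```python
def pie_segment_index(azimuth, n_segments=12):
    """Map an azimuth (degrees, zero = straight ahead) to one of the
    equal angular segments of the pie pattern, so the segment whose
    central angle A1 equals the azimuth AD1 represents the source."""
    return int(((azimuth % 360.0) / 360.0) * n_segments) % n_segments

def segment_color(intensity_db, palette):
    """Pick the segment color from a palette ordered quiet -> loud:
    each 10 dB step moves one entry along the palette."""
    idx = min(int(max(intensity_db, 0.0) // 10), len(palette) - 1)
    return palette[idx]
```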
In this embodiment, the first computer-generated visual content includes an arrow pattern, where the color or size of the arrow pattern is determined by the controller according to the sound intensity level, and the slope of the arrow pattern is determined by the controller according to the relative direction; the arrow pattern is rendered pointing toward a side of the view-via-lens, the side being determined by the controller to be the same side on which the sound source lies relative to the wearable electronic device.
For example, referring to Fig. 3C, assume the real-world scene of the view-via-lens VL1 shows the phone 302 on the desk 300, and the view-via-lens VL1 further includes an arrow pattern AP1 representing the first computer-generated visual content, used to indicate a sound source outside the view-via-lens VL1.
It should be noted that the embodiment depicted in Fig. 3C includes computer-generated visual content used to indicate sounds from every direction, representing sound imaging in three dimensions. In addition to the arrow patterns AP1, AP2, and AP3, the pie pattern CP1 drawn in Fig. 3B is also shown. The pie pattern CP1 represents the azimuth of each sound source on the X-Y plane, and the slopes of the arrow patterns AP1, AP2, AP3 indicate the elevation angle (also called the altitude angle) of each detected sound source relative to the horizontal plane. The points PS1, PS2, PS3 on the pie pattern CP1 correspond respectively to the points PT1, PT2, PT3 at the tips of the arrow patterns AP1, AP2, AP3. In other words, the pie pattern CP1 and the arrow patterns AP1, AP2, AP3 together show sound imaging in three dimensions.
Specifically, with the loudspeaker 301 of the sound S1 judged not to be within the view-via-lens VL1, the processor 131 starts generating the arrow pattern AP1 as the first computer-generated visual content. First, the processor 131 determines the size or color of the arrow pattern AP1 according to the sound intensity level of the sound S1 (for example, a larger size corresponds to a higher sound intensity level, or a darker color corresponds to a higher sound intensity level). Then, the processor 131 determines the side of the loudspeaker 301 according to the relative direction or relative position between the loudspeaker 301 and the wearable electronic device 10, and the determined side is the side of the view-via-lens VL1 toward which the arrow pattern AP1 points.
In one embodiment, the processor 131 determines the slope of the arrow pattern AP1 according to the relative direction of the loudspeaker (on the Y-Z plane), that is, according to the elevation angle relative to the wearable electronic device. For example, a slope of zero for the arrow pattern AP1 indicates that the loudspeaker 301 is at the same height as the wearable electronic device.
In one embodiment, the position of the point at which an arrow pattern points, together with the slope of the arrow pattern, helps indicate the direction the corresponding sound comes from. For example, the arrow pattern AP2 points at the point PT2, located below the middle of the left side, which indicates that the sound comes from a sound source at a lower height than the wearable electronic device. Moreover, according to the slope of the arrow pattern AP2 (determined from the corresponding azimuth judged between the sound source of arrow pattern AP2 and the wearable electronic device), the sound source corresponding to the arrow pattern AP2 is at the lower-left side of the wearable electronic device (extending a line along the slope of the arrow pattern through the center of the view-via-lens VL1 yields the azimuth, which indicates the relative bearing between the sound source and the wearable electronic device). In another embodiment, the arrow pattern AP3 indicates that the corresponding sound comes from the upper-right side of the wearable electronic device. The position of the sound source of arrow pattern AP3 is outside the range of the view-via-lens VL1 (for example, in the space above the wearer's head, which is out of the line of sight).
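A small sketch of how the arrow attributes described above could be derived, assuming the size scales linearly with the level in dB and the slope is the tangent of the elevation angle; the scaling constants are placeholders, not values from the patent:

```python
import numpy as np

def arrow_attributes(intensity_db, elevation_deg, base_size=24.0):
    """Derive the drawn arrow's size and slope: size grows with the
    sound intensity level, and the slope encodes the source elevation
    (slope 0 = same height as the device, negative = below it)."""
    size = base_size * (1.0 + max(intensity_db, 0.0) / 60.0)
    slope = np.tan(np.radians(elevation_deg))
    return size, slope
```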
In this embodiment, the first computer-generated visual content further includes a bar pattern, similar to the pie pattern: the controller determines the color of the bar pattern according to the sound intensity level, and the bar pattern is rendered along a side of the view-via-lens, the side being determined by the controller to be the same side on which the sound source lies relative to the wearable electronic device.
Fig. 3D is a schematic diagram, according to an embodiment of the invention, of another first computer-generated visual content produced for the sound of the source in Fig. 3A.
In this embodiment, the processor 131 can determine and display the height relationship between the wearer and the sound source. As shown in Fig. 3D, by determining the height H1 on the bar pattern BP1 according to the relative direction D11 on the X-Z plane, the processor 131 can determine the position of the sound mark PS5 used to indicate the sound S1. In this case, since the height H1 is judged to be the same as the height H2, the position of the sound mark PS5 indicating the sound S1 is determined to be in the middle of the bar pattern BP1. In another embodiment, if the height of the sound is judged to be lower than the height of the wearable electronic device, the segment indicating the corresponding sound is placed in the lower part of the bar pattern. As a further example, if the height of the sound is judged to be higher than the height of the wearable electronic device, the corresponding sound segment is placed in the upper part of the bar pattern.
In this embodiment, the processor 131 can generate four bar patterns, one along each of the four sides of the view-via-lens VL2. Referring to Fig. 3E, the loudspeaker 301 is judged not to be within the view-via-lens, and the processor 131 can display the bar pattern BP1, together with the sound mark PS2, as the first computer-generated visual content corresponding to the sound S1. The processor 131 determines from the relative direction D11 that the height H1 of the loudspeaker 301 is the same as the height H2 of the reference point of the wearable electronic device, and the darker segment rendered in the middle of the bar pattern BP1 along the left side of the view-via-lens VL2 can indicate that the loudspeaker 301 of the sound S1 is located to the left of the wearable electronic device and that its height is the same as the height of the wearable electronic device. In another embodiment, two darker segments rendered on the bar pattern BP2 along the top of the view-via-lens VL2 can indicate that two corresponding sounds come respectively from the upper-left and upper-right of the wearable electronic device. The processor 131 can add corresponding sound marks on the bar patterns (for example, sound marks PS2, PS3, and PS4). In one embodiment, the bar patterns BP3 or BP4 can also be displayed along the right side and the bottom of the view-via-lens VL2. The invention does not limit the display manner of the bar patterns; the bar patterns BP1, BP2, BP3, or BP4 may be displayed or generated only when a sound source is detected.
Referring to Fig. 4A, in an example similar to Fig. 3A, assume that a telephone 302 emitting a sound S2 is on the desk 300 in front of the wearable electronic device, and the wearer can see the telephone 302 and the desk 300 (for example, the real-world scene in the view-via-lens shows the phone on the desk); the loudspeaker 301 to the left of the wearable electronic device 10 makes no sound and is not seen by the wearer (the loudspeaker 301 is beyond the field of view seen from lens 121 or lens 122).
Sound receivers 111(1) to 111(5) and 112(1) to 112(5) receive the sound S2 and generate, for the controller 130, a plurality of audio data corresponding to the sound S2. From its analysis of the audio data, the controller 130 determines the relative direction D31 and the relative position of the position P3 of the phone 302 with respect to the reference point P2 of the wearable electronic device 10.
In this embodiment, the controller 130 determines, according to the relative direction D31 and the relative position P3, whether the source of the sound S2 (the phone 302) is located within the view-via-lens. In more detail, as shown in Fig. 4A, according to the relative direction D31 and the relative positions of the reference points P2, 141, and 142, the processor 131 can calculate the relative direction D21 between position P3 and the reference point 141 corresponding to lens 121, and the relative direction D22 between position P3 and the reference point 142 corresponding to lens 122. The reference point 141 may be, for example, the position from which the view-via-lens is captured by the vision of the wearer's left eye, and the reference point 142 the position from which the view-via-lens is captured by the vision of the wearer's right eye. In another embodiment, the reference points 141 and 142 can be designated as positions corresponding respectively to the lens imaging planes. For example, the computer-generated visual content shown on lens 121 and the real-world scene seen through lens 121 can be captured on the imaging plane corresponding to lens 121.
Similarly to the above, taking the left lens 121 as an example, the processor 131 determines a first range (a horizontal range) of relative directions (on the X-Y plane) corresponding to the boundaries B1 and B2 of the view-via-lens of lens 121 and reference point 141, and a second range (a vertical range) of relative directions (on the Y-Z plane) corresponding to the boundaries B3 and B4 of the view-via-lens of lens 121. The processor 131 can then determine whether the image of the phone 302 is within the view-via-lens VL3 by checking whether the relative direction D21 on the X-Y plane falls within the horizontal range, or whether the relative direction D21 on the vertical plane of the orthogonal coordinate system falls within the vertical range. In the situation of Fig. 4A, the relative direction D21 is judged to be within the horizontal range, and the processor 131 determines that the image of the phone 302 is located within the view-via-lens VL3 corresponding to lens 121 (the phone 302 is judged to be within the view-via-lens VL3). In one embodiment, the processor can confirm/determine whether the image of the phone 302 is within the view-via-lens VL3 by checking whether the relative position of the phone 302 with respect to the wearable electronic device lies within a virtual viewing space (corresponding to lens 121), where every position in that space is judged to be within the view-via-lens VL3 (corresponding to lens 121).
In this embodiment, if the sound source is determined, according to the relative direction and relative position, to be located within the view-via-lens, the second computer-generated visual content corresponding to the sound will include a sound source pattern, where the color or size of the sound source pattern is determined according to the sound intensity level. The controller determines, according to the relative position and relative direction of the sound with respect to the wearable electronic device, the virtual coordinates corresponding to the sound source in the view-via-lens, and the display device renders the second computer-generated visual content at those virtual coordinates on the lens, so that the second computer-generated visual content is drawn at the position of the sound source in the view-via-lens.
For example, according to the relative direction D21 and the relative positions of position P3 and reference point 141, the processor 131 can calculate the virtual coordinates corresponding to the phone 302 in the view-via-lens VL3 of lens 121. The virtual coordinates are the coordinates of the virtual point VP1 at which the relative direction D21 (from position P3 to the reference point 141, where the wearer's left-eye vision captures the scene) passes through lens 121 within the view-via-lens VL3. In other words, the virtual point VP1 represents the center of the image of the phone 302 in the view-via-lens VL3 of lens 121. The determined (calculated) virtual coordinates corresponding to the phone 302 in the view-via-lens of lens 121 can be used to show, on lens 121, the second computer-generated visual content corresponding to the sound S2. Similarly, the virtual point VP2 represents the center of the image of the phone 302 in the view-via-lens of lens 122, and the corresponding virtual coordinates are used to show, on lens 122, the second computer-generated visual content corresponding to the sound S2.
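The virtual point VP1/VP2 amounts to a line-plane intersection; a sketch under the assumption that the lens display surface is the plane y = const in the device coordinate frame (the geometry and names are illustrative only):

```python
import numpy as np

def virtual_point_on_lens(source_pos, eye_ref, lens_plane_y):
    """Where the line of sight from the eye reference point (141/142)
    toward the source crosses the lens plane: the virtual point VP1/VP2
    at which the second visual content should be rendered.
    source_pos, eye_ref: 3D points in the device frame (numpy arrays)."""
    d = source_pos - eye_ref
    if abs(d[1]) < 1e-9:
        raise ValueError("line of sight is parallel to the lens plane")
    t = (lens_plane_y - eye_ref[1]) / d[1]
    return eye_ref + t * d  # (x, y, z); x and z are the on-lens coordinates
```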
As shown in Fig. 4B, following the example of Fig. 4A, the phone 302 is now judged to be located within the view-via-lens VL3, and the processor 131 can generate the second computer-generated visual content corresponding to the sound S2, where the second computer-generated visual content may be a sound source pattern CP2. In this embodiment, the sound source pattern may be one or more pie patterns whose color and/or size can be determined according to the sound intensity level of the sound S2. As shown in Fig. 4B, the real-world scene via the view-via-lens VL3 shows the phone 302 on the desk 300, and the view-via-lens VL3 also includes the sound source pattern CP2 (for example, pie patterns CP2(1), CP2(2), and CP2(3)), drawn at the position of the image of the phone 302 in the real-world scene (within the view-via-lens VL3). In one embodiment, the intensity of the sound corresponding to the sound source pattern CP2 can be shown by the number of pie patterns of the sound source pattern CP2 (more circles indicate greater sound intensity). In another embodiment, the intensity of the sound corresponding to the sound source pattern CP2 can be shown by the color of the pie patterns according to a preset colormap.
In this embodiment, the preset colormap is the Jet colormap: the colors of the Jet colormap correspond in turn to ranges of sound intensity, so that sound intensities are mapped onto the Jet colormap. It should be noted that the invention is not limited thereto; for example, in another embodiment, the preset colormap can be any suitable colormap, such as the HSV colormap, Cool colormap, Gray colormap, Hot colormap, Lines colormap, and so on.
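A sketch of such a colormap lookup using Matplotlib's built-in Jet colormap; the dB calibration range used for normalization is an assumption, not a value from the patent:

```python
import numpy as np
import matplotlib

def intensity_to_color(intensity_db, min_db=30.0, max_db=90.0):
    """Map a sound intensity level onto the Jet colormap: quiet sounds
    fall at the blue end, loud sounds at the red end."""
    t = float(np.clip((intensity_db - min_db) / (max_db - min_db), 0.0, 1.0))
    r, g, b, _ = matplotlib.colormaps["jet"](t)
    return int(r * 255), int(g * 255), int(b * 255)  # RGB for rendering
```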
The invention does not limit the shape of the sound source pattern. For example, as shown in Fig. 4C, the view-via-lens VL4 includes a sound source pattern of multiple circular areas CP3, CP4, and CP5 with different colors and sizes. In addition, in one embodiment, the processor 131 can blend the patterns corresponding to multiple sound sources into a sound source pattern with an irregular shape, where differently colored areas represent different sound intensities. Furthermore, as described above, the colors of the circular areas CP3, CP4, and CP5 are determined according to the preset colormap. For example, when the colors of the circular areas CP3, CP4, and CP5 are determined according to the Jet colormap, the processor 131 can map the colors of the circular areas CP3, CP4, and CP5 through the Jet colormap to present the sound intensity image of the sound sources.
Referring to Fig. 5A, the background image shows the real-world scene in front of the wearable electronic device, and the scene display embodiment is similar to those depicted in Figs. 3A and 3C. As shown in Fig. 5A, the real-world scene 511 in front of the wearable electronic device shows a phone 501 on the desk 601 to the front right of the wearer, and a loudspeaker 502 on the desk 602 to the front left of the wearer. Assume that the phone 501 and the loudspeaker 502 both emit sound, and that part of the real-world scene is captured as the real-world scene in the view-via-lens VL5. The controller 130 can determine the relative positions P4 and P5 of the phone 501 and the loudspeaker 502 from their sounds. In addition, the wearer can see the phone 501 and the desk 601 through lens 121 or 122 (the phone 501 is within the field of view of the lens).
As shown in Fig. 5B, the controller 130 determines that the sound source 501 is within the view-via-lens VL5 and that the other sound source 502 is not within the view-via-lens VL5. Accordingly, the controller 130 generates the computer-generated visual contents CP6 and AP4 shown in Fig. 5B. As described above, these two computer-generated contents provide information about the sounds around the wearable electronic device: one about a source within the view-via-lens, the other about a source not within it. The second computer-generated visual content CP6 can represent that the sound source corresponding to CP6 is located, in the real world, at the position where CP6 is rendered. The first computer-generated visual content AP4 can represent that the other sound source corresponding to AP4 is located, in the real world, to the left of the wearable electronic device. If the wearable electronic device is turned toward it, the image of the other sound source can then be captured in a view-via-lens VL6 different from the view-via-lens VL5.
In Fig. 5A, assume that another part of the real-world scene is captured as the real-world scene in the view-via-lens VL6. The controller 130 can determine the relative positions P4 and P5 of the phone 501 and the loudspeaker 502 from their sounds. As shown in Fig. 5A, the phone 501 and the loudspeaker 502 are both within the view-via-lens VL6.
As shown in Fig. 5C, the controller 130 can determine that both sound sources 501 and 502 are within the view-via-lens VL6, and the controller 130 correspondingly generates the second computer-generated visual contents CP6 and CP7, located at positions P4 and P5 respectively, as shown in Fig. 5C.
In summary, the wearable electronic device provided by the invention, the operating method performed by a computer on the electronic device, and the electronic system can receive a sound produced by a nearby sound source, analyze the audio data corresponding to the received sound, and correspondingly generate and display an audio image corresponding to the received sound, so as to notify the user of the intensity of the sound and the relative direction/position of the sound source through the audio image (computer-generated visual content) shown in the view-via-lens or in a captured image.
The technical features of the embodiments described above can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the invention, and these all belong to the protection scope of the invention. Therefore, the protection scope of this patent shall be determined by the appended claims.

Claims (16)

1. A wearable electronic device, comprising:
a front frame;
side frames;
a transparent lens, including a display surface, disposed on the front frame, and having a lens field of view viewed from one side of the transparent lens;
a sound receiver array, disposed on the front frame and the side frames, for receiving a sound of a surrounding area and generating a plurality of audio data according to the received sound; and
a controller, coupled to the transparent lens and the sound receiver array, for analyzing the audio data to determine a relative position of a sound source with respect to the wearable electronic device, generating, based at least in part on the relative position, a visual content representing the sound, and showing the visual content on the display surface of the transparent lens so as to overlap the lens field of view.
2. The wearable electronic device according to claim 1, characterized in that the controller is further configured to generate the visual content representing the sound when the sound has a sound intensity level greater than an intensity threshold.
3. The wearable electronic device according to claim 1, characterized in that the controller is further configured to use an audio filter to prevent the generated audio data from being affected by a wearer's voice and by spatial aliasing.
4. The wearable electronic device according to claim 1, characterized in that a plurality of first sound receivers in the sound receiver array are distributed on the front frame around the transparent lens, and a plurality of second sound receivers in the sound receiver array are distributed on the side frames.
5. The wearable electronic device according to claim 1, wherein, when the sound source is determined according to the relative position not to be within the lens field of view, the controller generates a first visual content indicating a relative direction and a sound intensity level of the sound, and displays the first visual content on the display surface of the transparent lens,
wherein, when the sound source is determined according to the relative position to be within the lens field of view, the controller generates a second visual content indicating the sound intensity level of the sound, and displays the second visual content on the display surface of the transparent lens by rendering the second visual content at a position of the sound source within the lens field of view.
6. The wearable electronic device according to claim 5, wherein the first visual content representing the sound comprises:
an arrow pattern having a color and/or a size determined according to the sound intensity level of the sound, and a slope determined according to the relative position,
wherein the arrow pattern is rendered to point toward a side of the lens field of view, indicating a position of the sound source relative to the wearable electronic device.
7. The wearable electronic device according to claim 5, wherein, when the sound source is determined not to be within the lens field of view, the first visual content representing the sound comprises:
a pie pattern, wherein a first portion among a plurality of portions of the pie pattern corresponding to a plurality of different angles corresponds to the sound,
wherein a color of the first portion is determined according to the sound intensity level of the sound,
wherein an angle of the first portion of the pie pattern represents a corresponding azimuth between the sound source and the wearable electronic device.
8. The wearable electronic device according to claim 5, wherein, when the sound source is determined to be within the lens field of view, the second visual content corresponding to the sound comprises:
a sound source pattern having a color and/or a size determined according to the sound intensity level of the sound,
wherein the controller determines a virtual coordinate of the sound source within the lens field of view according to the relative position of the sound source,
wherein the controller renders the second visual content on the display surface of the transparent lens based on the virtual coordinate, the second visual content being rendered at a position of the sound source within the lens field of view.
9. A computer-implemented operating method, adapted for an electronic device having a controller, a sound receiver array, and a transparent lens with a display surface, the method comprising:
receiving, by the sound receiver array, a sound emitted by a sound source in a surrounding area of the electronic device, and generating a plurality of audio data according to the received sound;
analyzing, by the controller, the plurality of audio data to determine a relative position of the sound source with respect to the electronic device;
generating, by the controller, a visual content representing the sound based at least in part on the relative position; and
rendering the visual content representing the sound on the display surface of the transparent lens, the transparent lens having a lens field of view viewed from one side thereof, wherein the visual content is rendered so as to overlap the lens field of view.
10. The computer-implemented operating method according to claim 9, wherein the step of generating the visual content representing the sound comprises: generating, by the controller, the visual content representing the sound having a sound intensity level greater than an intensity threshold.
11. The computer-implemented operating method according to claim 9, wherein the step of generating the visual content representing the sound comprises: applying, by the controller, a voice band filter to prevent the audio data from being affected by a voice of a wearer and by spatial aliasing.
12. The computer-implemented operating method according to claim 9, wherein, when the sound source is determined according to the relative position not to be within the lens field of view, the step of generating the visual content representing the sound comprises:
generating, by the controller, a first visual content representing the sound and indicating a sound intensity level and a relative direction of the sound, wherein the step of rendering the visual content representing the sound on the display surface of the transparent lens comprises:
rendering, on the display surface of the transparent lens, the first visual content of the sound of the sound source that is not within the lens field of view.
13. The computer-implemented operating method according to claim 12, wherein the first visual content representing the sound comprises: an arrow pattern having a color and/or a size determined according to the sound intensity level of the sound, and a slope determined according to the relative position, wherein the arrow pattern is rendered to point toward a side of the lens field of view, thereby indicating a position of the sound source relative to the electronic device.
14. The computer-implemented operating method according to claim 12, wherein the first visual content representing the sound comprises: a pie pattern, wherein a first portion among a plurality of portions of the pie pattern corresponding to a plurality of different angles corresponds to the sound,
wherein a color of the first portion is determined by the controller according to the sound intensity level of the sound,
wherein an angle of the first portion of the pie pattern indicates a corresponding azimuth between the sound source and the electronic device.
15. The computer-implemented operating method according to claim 9, wherein, when the sound source is determined according to an analysis result to be within the lens field of view, the step of generating the visual content representing the sound comprises:
generating, by the controller, a second visual content representing the sound determined to be within the lens field of view, wherein the step of rendering the generated visual content representing the sound on the display surface of the transparent lens comprises:
rendering the second visual content on the display surface of the transparent lens at a position of the sound source within the lens field of view.
16. The computer-implemented operating method according to claim 15, wherein the second visual content representing the sound comprises a sound source pattern having a color and/or a size determined according to the sound intensity level of the sound, and wherein the step of rendering the visual content representing the sound on the transparent lens comprises:
determining, by the controller, a virtual coordinate corresponding to the sound source within the lens field of view according to the relative position of the sound source; and
rendering the sound source pattern at the virtual coordinate on the display surface of the transparent lens, wherein the second visual content overlaps the sound source within the lens field of view.
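Claims 3 and 11 recite a voice band filter that keeps the wearer's own voice and spatial aliasing out of the audio data, but fix no implementation. One plausible reading, sketched with SciPy under invented cutoff frequencies, is a band-stop filter over the speech band, combined with the standard c/(2d) spatial-aliasing limit for a receiver pair:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16000  # Hz, assumed sample rate

def voice_band_filter(audio, low_hz=300.0, high_hz=3400.0):
    """Attenuate the dominant speech band so the wearer's own voice
    contributes less to the direction/intensity analysis; a band-stop
    response is only one plausible reading of the claim."""
    sos = butter(4, [low_hz, high_hz], btype="bandstop", fs=FS, output="sos")
    return sosfiltfilt(sos, audio)

def spatial_alias_limit(mic_spacing_m, c=343.0):
    """Highest frequency a receiver pair can resolve without spatial
    aliasing: f_max = c / (2 * d)."""
    return c / (2.0 * mic_spacing_m)

# e.g. spatial_alias_limit(0.14) ~= 1225 Hz: direction estimates above this
# frequency would be ambiguous for receivers 14 cm apart.
```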
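Claims 2, 6, and 8 tie the color and/or size of the rendered pattern to the sound intensity level. A minimal sketch of such a mapping, with an invented dB ramp and palette:

```python
def pattern_style(level_db, min_db=40.0, max_db=90.0):
    """Map a sound intensity level to a color and size for the arrow or
    sound-source pattern; the ramp endpoints and palette are illustrative."""
    t = max(0.0, min(1.0, (level_db - min_db) / (max_db - min_db)))
    color = (int(255 * t), int(255 * (1.0 - t)), 0)  # green -> red with loudness
    size_px = 8 + int(24 * t)                        # 8..32 px
    return color, size_px
```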
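For sources outside the lens field of view, claims 6, 7, 13, and 14 describe an arrow whose slope follows the relative position and a pie pattern whose sector angle encodes the azimuth. A sketch of that geometry, under invented coordinate conventions:

```python
import math

def arrow_slope(source_xy, fov_center_xy=(0.0, 0.0)):
    """Slope (angle in degrees) of an arrow pointing from the center of the
    lens field of view toward an off-view source (cf. claims 6 and 13)."""
    dx = source_xy[0] - fov_center_xy[0]
    dy = source_xy[1] - fov_center_xy[1]
    return math.degrees(math.atan2(dy, dx))

def pie_sector_for_azimuth(azimuth_deg, n_sectors=8):
    """Index of the pie-pattern sector whose angular range contains the
    source azimuth (cf. claims 7 and 14); 0 degrees = straight ahead."""
    width = 360.0 / n_sectors
    return int(((azimuth_deg % 360.0) + width / 2.0) // width) % n_sectors
```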
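Claims 8 and 16 place the sound source pattern at a virtual coordinate inside the lens field of view. One plausible mapping from a source direction to display pixels, assuming a simple pinhole model with invented display parameters (valid for directions in front of the wearer):

```python
import math

DISPLAY_W, DISPLAY_H = 640, 480  # px, assumed overlay resolution
HALF_FOV_DEG = 40.0              # assumed horizontal half field of view

def virtual_coordinates(azimuth_deg, elevation_deg):
    """Map a source direction to pixel coordinates on the lens display so the
    sound source pattern overlaps the source in the lens field of view."""
    f = (DISPLAY_W / 2.0) / math.tan(math.radians(HALF_FOV_DEG))
    x = DISPLAY_W / 2.0 + f * math.tan(math.radians(azimuth_deg))
    y = DISPLAY_H / 2.0 - f * math.tan(math.radians(elevation_deg))
    if 0 <= x < DISPLAY_W and 0 <= y < DISPLAY_H:
        return (x, y)  # source inside the lens field of view
    return None        # outside the field of view
```

A None result corresponds to the out-of-view branch of claim 5, where the arrow or pie pattern would be used instead of the sound source pattern.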
CN201810241438.7A 2018-03-22 2018-03-22 Wearable electronic device and its operating method for audio imaging Withdrawn CN108594988A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810241438.7A CN108594988A (en) 2018-03-22 2018-03-22 Wearable electronic device and its operating method for audio imaging

Publications (1)

Publication Number Publication Date
CN108594988A true CN108594988A (en) 2018-09-28

Family

ID=63627183

Country Status (1)

Country Link
CN (1) CN108594988A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101213761A (en) * 2005-06-24 2008-07-02 汤姆森特许公司 Multipath searcher results sorting method
CN103946733A (en) * 2011-11-14 2014-07-23 谷歌公司 Displaying sound indications on a wearable computing system
CN105223551A (en) * 2015-10-12 2016-01-06 吉林大学 A kind of wearable auditory localization tracker and method
CN107015655A (en) * 2017-04-11 2017-08-04 苏州和云观博数字科技有限公司 Museum virtual scene AR experiences eyeglass device and its implementation
CN107367839A (en) * 2016-05-11 2017-11-21 宏达国际电子股份有限公司 Wearable electronic installation, virtual reality system and control method
CN104869524B (en) * 2014-02-26 2018-02-16 腾讯科技(深圳)有限公司 Sound processing method and device in three-dimensional virtual scene

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109597481A (en) * 2018-11-16 2019-04-09 Oppo广东移动通信有限公司 AR virtual portrait method for drafting, device, mobile terminal and storage medium
WO2020098462A1 (en) * 2018-11-16 2020-05-22 Oppo广东移动通信有限公司 Ar virtual character drawing method and apparatus, mobile terminal and storage medium
CN109597481B (en) * 2018-11-16 2021-05-04 Oppo广东移动通信有限公司 AR virtual character drawing method and device, mobile terminal and storage medium
CN112712817A (en) * 2020-12-24 2021-04-27 惠州Tcl移动通信有限公司 Sound filtering method, mobile device and computer readable storage medium
CN112712817B (en) * 2020-12-24 2024-04-09 惠州Tcl移动通信有限公司 Sound filtering method, mobile device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20180928)