WO2019221506A1 - Emotional call method using binaural sound and device therefor - Google Patents
- Publication number
- WO2019221506A1 (PCT/KR2019/005823)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- call
- emotional
- voice
- user
- sound
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/725—Cordless telephones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2201/00—Electronic components, circuits, software, systems or apparatus used in telephone systems
- H04M2201/34—Microprocessors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/52—Details of telephonic subscriber devices including functional features of a camera
Definitions
- the present invention relates to an emotional call method using a binaural sound and a device therefor (METHOD FOR EMOTIONAL CALLING USING BINAURAL SOUND AND APPARATUS THEREOF).
- More specifically, the present invention relates to technology in which the position of the counterpart's voice is set differently according to the situation, thereby reducing the cognitive load and the energy the user must expend on a call.
- A video call is basically a method of talking while looking at the other party, so the user makes the call while looking at the display window of the mobile device.
- Depending on the situation, the user may make a video call holding the mobile device in the right or left hand, or may mount it on a separate holder that fixes the device in place.
- However, the counterpart's voice is always heard in the same way regardless of where the device is, so the user may sense a mismatch between the device's position and the sound. Because this creates a cognitive load on the user, it can also add fatigue to the call.
- An object of the present invention is to provide a call technology that produces an emotional effect by combining psychological elements with stereophonic technology using binaural sound.
- Another object of the present invention is to reduce the user's fatigue by reducing the cognitive load required for a voice call or a video call.
- The emotional call method for reducing the cognitive load of a call comprises the steps of: confirming the emotional sound setting; changing the voice position of the counterpart according to the emotional sound setting; and providing the counterpart's voice as a binaural sound corresponding to the changed voice position.
- the emotional sound setting may be activated when the earphone is used, and may be deactivated when the earphone is not used.
- the step of confirming the emotional sound setting may check the relative position of the call terminal with respect to the face of the user performing the call, and the relative position may be confirmed as the voice position of the counterpart.
- the checking of the emotional sound setting may include detecting the positional relationship of at least two of the eyes, nose, mouth, and ears of the user's face based on a camera provided in the call terminal, detecting the frontal direction of the face in consideration of that positional relationship, and calculating an angle corresponding to the relative position based on the detected frontal direction.
- the step of confirming the emotional sound setting may determine, based on the phone number, whether a pre-setting exists for the counterpart and, if so, determine the voice position of the counterpart corresponding to the pre-setting.
- when the pre-setting is the whisper mode, the position corresponding to the user's right ear may be confirmed as the voice position of the counterpart.
- when the pre-setting is the nagging mode, the position corresponding to the back of the user's head may be confirmed as the voice position of the counterpart.
- in a conference call, a designated position may be confirmed as the voice position of each counterpart in consideration of the number of counterparts participating in the conference call.
- the designated position may be set based on a value obtained by dividing an angle of a preset range, centered on the user's line of sight, by the number of counterparts.
- the emotional call method may further include providing a positioning interface through which the voice position of each counterpart can be designated in consideration of the number of counterparts.
- the emotional call method may further include deactivating the emotional sound setting when the emotional call end condition is satisfied during an emotional call, and activating the emotional sound setting when the emotional call execution condition is satisfied during a general call.
- the emotional call end condition may be satisfied when at least one of the following occurs: use of the earphone ends during the emotional call, or an emotional call end command based on a user input is entered during the emotional call. The emotional call execution condition may be satisfied when at least one of the following occurs: use of the earphone begins during the general call, or an emotional call execution command based on a user input is entered during the general call.
- the emotional call method may be applicable to at least one of a voice call and a video call.
- The emotional call application stored in a computer-readable recording medium according to an embodiment of the present invention executes, to reduce the cognitive load of a call, the steps of: confirming the emotional sound setting; changing the voice position of the counterpart according to the emotional sound setting; and providing the counterpart's voice as a binaural sound corresponding to the changed voice position.
- the emotional sound setting may be activated when the earphone is used, and may be deactivated when the earphone is not used.
- the step of confirming the emotional sound setting may check the relative position of the call terminal with respect to the face of the user performing the call, and the relative position may be confirmed as the voice position of the counterpart.
- the checking of the emotional sound setting may include detecting the positional relationship of at least two of the eyes, nose, mouth, and ears of the user's face based on a camera provided in the call terminal, detecting the frontal direction of the face in consideration of that positional relationship, and calculating an angle corresponding to the relative position based on the detected frontal direction.
- the step of confirming the emotional sound setting may determine, based on the phone number, whether a pre-setting exists for the counterpart and, if so, determine the voice position of the counterpart corresponding to the pre-setting.
- when the pre-setting is the whisper mode, the position corresponding to the user's right ear may be confirmed as the voice position of the counterpart.
- when the pre-setting is the nagging mode, the position corresponding to the back of the user's head may be confirmed as the voice position of the counterpart.
- in a conference call, a designated position may be confirmed as the voice position of each counterpart in consideration of the number of counterparts participating in the conference call.
- the designated position may be set based on a value obtained by dividing an angle of a preset range, centered on the user's line of sight, by the number of counterparts.
- the emotional call application may further execute the step of providing a positioning interface through which the voice position of each counterpart can be designated in consideration of the number of counterparts.
- the emotional call application may further execute the steps of deactivating the emotional sound setting when the emotional call end condition is satisfied during an emotional call, and activating the emotional sound setting when the emotional call execution condition is satisfied during a general call.
- the emotional call end condition may be satisfied when at least one of the following occurs: use of the earphone ends during the emotional call, or an emotional call end command based on a user input is entered during the emotional call. The emotional call execution condition may be satisfied when at least one of the following occurs: use of the earphone begins during the general call, or an emotional call execution command based on a user input is entered during the general call.
- the emotional call application may be applicable to at least one of a voice call and a video call.
- the present invention can reduce the fatigue the user feels by reducing the cognitive load required for a voice call or a video call.
- the present invention can provide a personalized call environment for the user by identifying the call counterpart and setting the emotional sound.
- the present invention can improve efficiency in work by effectively recognizing voices for each participant in a conference call.
- FIG. 1 is a flowchart illustrating an emotional call method using binaural sound according to an embodiment of the present invention.
- FIG. 2 is a diagram illustrating an example of an emotional sound setting based on a relative position of a call terminal with respect to a face of a user according to the present invention.
- FIG. 3 is a view showing an example of detecting the positional relationship of eyes, nose, mouth, ear according to the present invention.
- FIG. 4 is a diagram illustrating an example of calculating an angle of a relative position according to the present invention.
- FIGS. 5 and 6 are diagrams illustrating an example of calculating the distance of the relative position according to the present invention.
- FIGS. 10 to 12 are diagrams illustrating an example of an emotional sound setting for a conference call according to the present invention.
- FIGS. 13 to 16 illustrate an example of a positioning interface according to the present invention.
- FIG. 17 is a diagram illustrating a process of using the relative position of a call terminal with respect to the face of a user in an emotional call method according to an embodiment of the present invention.
- FIG. 18 is a diagram illustrating in detail a process of using pre-setting in the emotional call method according to an embodiment of the present invention.
- FIG. 19 is a diagram illustrating in detail a process of using a conference call in an emotional call method according to an embodiment of the present invention.
- FIG. 20 is a detailed flowchart illustrating a process of activating or deactivating the emotional sound setting during a call in an emotional call method according to an embodiment of the present invention.
- FIG. 21 is a block diagram showing a call terminal running the emotional call application according to an embodiment of the present invention.
- FIG. 1 is a flowchart illustrating an emotional call method using binaural sound according to an exemplary embodiment of the present invention.
- the emotional call method using the binaural sound checks the emotional sound settings (S110).
- the emotional call according to the present invention may correspond to a call method for reducing the cognitive load that may occur when the user makes a call using the emotional sound.
- the emotional sound corresponds to the binaural sound to which the binaural effect is applied, and the sound actually output to the user may correspond to the emotional sound.
- An emotional call may reduce the cognitive load through a stereoscopic effect based on binaural sound, lessening the fatigue the user feels during a call, or may provide the user with a three-dimensional call experience by adding an emotional element that cannot be felt in a general call.
- a service may be provided in consideration of the use of an audio device that can feel the binaural effect such as an earphone or a headphone.
- the emotional sound setting may be activated when the earphone is used, and deactivated when the earphone is not used.
- the earphone or headphones for activating the emotional sound setting may include not only wired devices but also wireless devices such as a Bluetooth headset or Bluetooth headphones.
- For convenience, activation of the emotional sound setting is described herein in terms of earphone use; activation or deactivation may equally be performed according to the use of headphones in the same manner as the earphone.
- the relative position of the call terminal with respect to the face of the user performing the call can be checked, and the relative position can be confirmed as the voice position of the counterpart.
- Referring to FIG. 2, the voice position of the counterpart may be confirmed by checking the relative position of the call terminal 220 with respect to the face of the user 210 performing the call. That is, the emotional sound setting may be performed so that the user 210 feels as if the counterpart's voice is heard at the location of the call terminal 220.
- the frontal direction of the face may be detected in consideration of the positional relationship, and the angle of the relative position may be calculated based on the detected frontal direction.
- the eyes, nose, mouth, and ears may be detected as shown in FIG. 3.
- the frontal direction of the face may be detected by detecting which direction the face of the user 210 faces through the positional relationship with respect to the detected eyes, nose, mouth, and ears.
- Referring to FIG. 4, the face front direction 411 of the user 410 and the front direction 421 of the call terminal 420 are illustrated, and the relative position angle 430 between the two directions may be calculated.
- For example, the user 410 may hold the call terminal 420 in the right hand during the call. If the user 410 speaks with the call terminal 420 held in the face front direction 411, there is no angular difference between the terminal front direction 421 and the face front direction 411, so the emotional sound setting may be performed so that the counterpart's voice is heard from the front.
- the terminal 420 may provide a stereoscopic call experience to the user 410 by performing emotional sound setting in the same manner.
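As a rough sketch of this angle calculation, assuming hypothetical landmark coordinates (the names `left_eye`, `right_eye`, and `nose` are illustrative, not from any particular face-detection library), the relative angle might be approximated as follows:

```python
import math

def estimate_relative_angle(landmarks):
    """Approximate the horizontal angle (yaw) between the user's face
    front direction and the terminal's camera axis from detected facial
    landmarks. `landmarks` holds (x, y) pixel coordinates; a real system
    would obtain them from a face detector."""
    (lx, _), (rx, _) = landmarks["left_eye"], landmarks["right_eye"]
    eye_span = rx - lx                       # apparent inter-eye distance
    eye_mid_x = (lx + rx) / 2.0
    nose_x = landmarks["nose"][0]
    # As the face turns, the nose shifts off the eye midpoint; the ratio
    # of that shift to half the eye span approximates sin(yaw).
    ratio = max(-1.0, min(1.0, 2.0 * (nose_x - eye_mid_x) / eye_span))
    return math.degrees(math.asin(ratio))

# Frontal face: nose centred between the eyes -> angle of 0 degrees.
frontal = {"left_eye": (40, 50), "right_eye": (80, 50), "nose": (60, 70)}
print(round(estimate_relative_angle(frontal), 1))  # 0.0
```

A real implementation would also use vertical landmark offsets to estimate pitch; this sketch covers only the horizontal component.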
- a face image of the user may be obtained based on a camera provided in the call terminal, and a distance corresponding to the relative position may be calculated in consideration of the image size of the face image.
- Comparing the call states, the face image 520 obtained through the camera of the call terminal in the call state 510 of FIG. 5 is larger than the face image 620 obtained through the camera in the call state 610 of FIG. 6. Since the face image 520 is larger than the face image 620, the relative position distance 511 of the call state 510 in FIG. 5 is shorter than the relative position distance 611 of the call state 610 in FIG. 6.
- the emotional sound setting may be performed by calculating a distance corresponding to the relative position based on the image photographed by the call terminal in this manner.
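Under a simple pinhole-camera assumption, this distance calculation can be sketched as below; the reference width and distance are hypothetical calibration constants, not values given in the disclosure:

```python
def estimate_distance(face_px_width, reference_px_width, reference_distance_cm):
    """Estimate the terminal-to-face distance from the apparent face width.

    Under a pinhole-camera model the apparent size is inversely
    proportional to distance, so a larger face image (FIG. 5) implies a
    shorter relative-position distance than a smaller one (FIG. 6).
    """
    return reference_distance_cm * reference_px_width / face_px_width

# Face twice as large in the image -> half the distance.
print(estimate_distance(400, 200, 60.0))  # 30.0
```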
- the pre-setting corresponds to information that can be set according to who is the call counterpart, and a separate interface may be provided so that the user of the call terminal may be set based on the phone number stored in the call terminal.
- when the pre-setting is the whisper mode, the position corresponding to the user's right ear may be confirmed as the voice position of the counterpart.
- For example, the whisper mode may be pre-set for calls with a lover; as shown in FIG. 7, the lover's voice position 730 may then be confirmed near the right ear of the user 710. That is, when a pre-setting exists, the voice position may be determined according to the pre-set mode regardless of the position of the call terminal 720.
- the counterpart's voice may be recognized more easily by setting the voice position to the right ear rather than the left ear.
- This setting takes into account the sound recognition path of the cerebrum, which will be described below with reference to the drawing shown in FIG. 8.
- Sound entering the right ear 810 is transmitted via the right cochlear 811 nerve and the medial geniculate body of the thalamus to the primary auditory cortex of the left brain, and sound entering the left ear 820 is transmitted via the left cochlear 821 nerve and the medial geniculate body to the primary auditory cortex of the right brain. That is, as can be seen at the nerve crossing portion 800, the right cochlear 811 nerve is connected to the left brain and the left cochlear 821 nerve to the right brain, so sound entering the right ear 810 is delivered to the left brain and sound entering the left ear 820 to the right brain.
- The Wernicke area, which corresponds to the listening comprehension area of the brain, is the hub of the language-related nerves located in the temporal lobe of the left brain, and receives information from the primary auditory cortex.
- The primary auditory cortex processes only simple hearing without considering linguistic meaning; what is heard is later processed as meaningful language in the Wernicke area, so processing in the Wernicke area is ultimately necessary.
- Since sound entering the left ear 820 is first delivered to the primary auditory cortex of the right brain, where the Wernicke area is not located, one more path is required than for sound entering the right ear 810. That is, sound entering the right ear 810 is delivered directly from the left-brain primary auditory cortex to the Wernicke area in the left brain, while sound entering the left ear 820 must be delivered from the right-brain primary auditory cortex across to the Wernicke area, so the path through the right ear 810 may be shorter than the path through the left ear 820.
- Therefore, by providing the sound to the right ear of the user 710 as shown in FIG. 7, the user 710 may recognize the voice more easily and with less fatigue.
- when the pre-setting is the nagging mode, the position corresponding to the back of the user's head may be confirmed as the voice position of the counterpart.
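The mapping from pre-set mode to voice position might be sketched as a simple lookup; the azimuth values and phone numbers below are illustrative assumptions, not from the disclosure:

```python
# Hypothetical mapping from pre-set call mode to a voice position,
# expressed as an azimuth in degrees (0 = front of the user,
# 90 = the right ear, 180 = behind the user's head).
PRESET_POSITIONS = {
    "whisper": 90.0,   # near the right ear, as in the whisper mode
    "nagging": 180.0,  # behind the head, as in the nagging mode
}

def voice_position_for(phone_number, presets):
    """Return the pre-set voice azimuth for a counterpart's phone number,
    or None when no pre-setting exists (in which case the relative-position
    logic or a general call applies instead)."""
    mode = presets.get(phone_number)
    return PRESET_POSITIONS.get(mode)

# Illustrative phone numbers, not from the disclosure.
presets = {"010-1234-5678": "whisper", "010-9999-0000": "nagging"}
print(voice_position_for("010-1234-5678", presets))  # 90.0
```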
- the emotional sound setting may be deactivated to perform a general call instead of an emotional call.
- the designated position may be identified as the voice position of the counterpart in consideration of the number of counterparts participating in the conference call.
- the designated position may be set based on a value obtained by dividing an angle of a predetermined range based on the line of sight of the user corresponding to the number of relative persons.
- Referring to FIG. 10, when there are two counterparts, the area is divided by dividing 180 degrees centered on the front of the user 1000 by 2, corresponding to the number of counterparts, and the voice positions of the counterparts may be set by placing the designated positions 1010 and 1020 at the midpoint angles of the divided areas.
- the angle of the preset range is similarly 180 degrees.
- Referring to FIG. 11, when there are three counterparts, the area is divided by dividing 180 degrees centered on the front of the user 1100 by 3, and the voice positions of the counterparts may be set by placing the designated positions 1110, 1120, and 1130 at the midpoint angles of the divided areas.
- Referring to FIG. 12, when there are four counterparts, the area is divided by dividing 360 degrees around the user 1200 by 4, and the voice positions of the counterparts may be set by placing the designated positions 1210, 1220, 1230, and 1240 at the midpoint angles of the divided areas.
- the method of setting the designated position is not limited to the method shown in Figs.
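The division scheme of FIGS. 10 to 12 can be sketched as follows; this is a minimal illustration, and the function name and the default 180-degree range are assumptions:

```python
def designated_positions(num_counterparts, angle_range=180.0):
    """Compute designated voice positions for a conference call by
    dividing a preset angular range (centred on the user's line of sight)
    into equal regions and placing each counterpart at the midpoint angle
    of a region, as in FIGS. 10 to 12.

    Angles are degrees relative to the user's front (negative = left,
    positive = right for a 180-degree range).
    """
    width = angle_range / num_counterparts
    start = -angle_range / 2.0
    return [start + width * (i + 0.5) for i in range(num_counterparts)]

print(designated_positions(2))         # [-45.0, 45.0]
print(designated_positions(3))         # [-60.0, 0.0, 60.0]
print(designated_positions(4, 360.0))  # [-135.0, -45.0, 45.0, 135.0]
```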
- the emotional call method using binaural sound may provide a positioning interface through which the voice position of each counterpart can be designated in consideration of the number of counterparts.
- For example, the user's call terminal may provide the positioning interface shown in FIG. 13. Assuming there are participants A, B, C, and D in addition to the user, interface screens for selecting each participant's location may be provided sequentially as shown in FIGS. 13 to 15.
- As shown in FIG. 16, a Modify Location button 1610 for modifying the participants' designated locations and a Start Conference Call button 1620 for starting the conference call with the currently designated locations may be provided.
- the positioning interface may be provided before the conference call starts or during the conference call, and the form thereof may not be limited to FIGS. 13 to 16.
- the emotional call method using binaural sound changes the voice position of the counterpart according to the emotional sound setting, and provides the counterpart's voice as a binaural sound corresponding to the changed voice position (S120).
- the binaural sound may be generated by applying the existing technology and the technology that can be developed in the future.
- For example, Korean Patent Publication No. 10-1599554 discloses a method of outputting a 3D binaural signal based on the international standard multichannel audio coding technology MPEG Surround: multichannel audio reproduction parameters are extracted based on the MPEG Surround (MPS) international standard, and HRTF (Head Related Transfer Function) filtering is performed on the downmix audio signal using those parameters to output a binaural signal.
- HRTF filtering may obtain the left and right impulse responses for each position, at specific intervals over 360 degrees of azimuth and 180 degrees of elevation, using a dummy-head microphone that models the human auditory organs.
- the multichannel audio reproduction parameters may be extracted based on spatial parameters expressed as the output level difference and the degree of correlation of the front and rear channel signals for each frequency band.
- Korean Patent Publication No. 10-0971700 discloses filtering left and right audio signals in the frequency domain based on the location information of a virtual sound source and per-channel binaural filter coefficients, and decoding the filtered signal into a binaural stereo signal.
- the stereo left/right audio signals of the input time domain are converted into frequency-domain signals using a Discrete Fourier Transform (DFT) or a Fast Fourier Transform (FFT), and the frequency-domain stereo left/right signals may be filtered into a binaural stereo signal based on the power gain value of each channel per subband, allocated according to the location information of the virtual sound source, and a left/right HRTF coefficient block in the frequency domain for each channel.
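The frequency-domain filtering step can be sketched roughly as below, using NumPy FFTs and toy head-related impulse responses (HRIRs); a real system would use measured HRIRs selected for the desired azimuth and elevation:

```python
import numpy as np

def binaural_filter(mono, hrir_left, hrir_right):
    """Render a mono voice signal as a binaural stereo pair by
    frequency-domain filtering: FFT the signal, multiply by the left and
    right head-related transfer functions, and inverse-FFT back."""
    n = len(mono) + len(hrir_left) - 1            # full convolution length
    spectrum = np.fft.rfft(mono, n)
    left = np.fft.irfft(spectrum * np.fft.rfft(hrir_left, n), n)
    right = np.fft.irfft(spectrum * np.fft.rfft(hrir_right, n), n)
    return left, right

# Toy HRIRs: the right channel is attenuated and delayed by one sample,
# crudely mimicking a source located to the listener's left.
voice = np.array([1.0, 0.5, -0.25, 0.0])
left, right = binaural_filter(voice, np.array([1.0, 0.0]), np.array([0.0, 0.6]))
print(np.allclose(left[:4], voice))  # True
```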
- the gain of each channel may be calculated by synthesizing spatial cue information based on the virtual source location information (VSLI) of the virtual sound source. For any subband m, the VSLI-based spatial cue information for the stereo signal may be expressed as the Left Half-plane Angle LHA(m), the Left Subsequent Angle LSA(m), the Right Half-plane Angle RHA(m), and the Right Subsequent Angle RSA(m).
- Based on techniques such as these, the present invention may generate the binaural sound corresponding to the voice position of the counterpart confirmed by the emotional sound setting.
- In the emotional call method using binaural sound, if the emotional call end condition is satisfied during an emotional call, the emotional sound setting may be deactivated.
- the emotional call end condition may be satisfied when at least one of the following occurs: use of the earphone ends during the emotional call, or an emotional call end command based on a user input is entered during the emotional call.
- In the emotional call method using binaural sound, if the emotional call execution condition is satisfied during a general call, the emotional sound setting may be activated.
- the emotional call execution condition may be satisfied when at least one of the following occurs: use of the earphone begins during the general call, or an emotional call execution command based on a user input is entered during the general call.
- the emotional call method may be applicable to at least one of a voice call and a video call. That is, the emotional call method according to the present invention can be applied both to a voice call, in which only the voice is heard, and to a video call, in which an image is shown together with the voice.
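The activation and deactivation conditions amount to a small state machine; a minimal sketch follows, where the state names and command strings are assumptions for illustration:

```python
def next_call_state(state, earphone_in_use, user_command=None):
    """Toggle between the 'emotional' and 'general' call states.

    Ending earphone use or an explicit "end" command deactivates the
    emotional sound setting; starting earphone use or an explicit
    "execute" command during a general call activates it. State and
    command names are illustrative, not from the disclosure.
    """
    if state == "emotional" and (not earphone_in_use or user_command == "end"):
        return "general"
    if state == "general" and (earphone_in_use or user_command == "execute"):
        return "emotional"
    return state

print(next_call_state("emotional", earphone_in_use=False))  # general
print(next_call_state("general", earphone_in_use=True))     # emotional
```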
- In addition, the various pieces of information generated in the emotional call process according to an embodiment of the present invention as described above may be stored in a separate storage module.
- Through this, the user's fatigue may be reduced, and a personalized call environment may be provided for the user by identifying the call counterpart and setting the emotional sound accordingly.
- FIG. 17 is a diagram illustrating a process of using the relative position of a call terminal with respect to the face of a user in an emotional call method according to an embodiment of the present invention.
- Referring to FIG. 17, when a call is first connected between the user terminal 1710 and the counterpart terminal 1720 (S1702), face recognition may be performed using the camera provided in the user terminal 1710 (S1704).
- Here, it may be assumed that the emotional sound setting of the user terminal 1710 is activated through the use of an earphone according to the present invention.
- the relative position of the user terminal 1710 with respect to the user's face may be checked (S1706) to set the emotional sound, and the voice position of the other party may be confirmed.
- the voice location of the voice data may be changed according to the voice location identified in the emotional sound setting step (S1710).
- the binaural sound may be generated and output to the user according to the changed voice position (S1712).
- FIG. 18 is a diagram illustrating a process of using pre-setting in the emotional call method according to an embodiment of the present invention in detail.
- Referring to FIG. 18, in the process of using pre-setting in the emotional call method according to an embodiment of the present invention, pre-setting is first performed at the user terminal 1810 (S1802), whereby an emotional call mode may be set for each phone number stored in the user terminal 1810.
- the user terminal 1810 may then check the phone number of the counterpart terminal 1820 to determine whether a pre-setting exists (S1806).
- Here, it may be assumed that the emotional sound setting of the user terminal 1810 is activated through the use of an earphone according to the present invention.
- the pre-setting mode may include a whisper mode, which confirms the position corresponding to the user's right ear as the counterpart's voice position, and a nagging mode, which confirms the position corresponding to the back of the user's head as the counterpart's voice position.
- the voice position of the voice data may be changed to the voice position corresponding to the pre-setting mode (S1814).
- the binaural sound may be generated and output to the user according to the changed voice position (S1816).
- the call may be performed using a general call sound.
- FIG. 19 is a diagram illustrating a process of using a conference call in detail in an emotional call method according to an embodiment of the present invention.
- Referring to FIG. 19, when a conference call including the user terminal 1910 and the counterpart terminals 1920-1 to 1920-N is started (S1902), the user terminal 1910 may first determine the number of counterparts (S1904).
- Here, it may be assumed that the emotional sound setting of the user terminal 1910 is activated through the use of an earphone according to the present invention.
- the user terminal 1910 may designate the locations of the counterpart terminals based on the number of counterparts, that is, the number of participants excluding the user (S1906).
- Thereafter, the voice position of counterpart 1's voice data may be changed to the position designated for counterpart 1 in step S1906 (S1910).
- the binaural sound may be generated and output to the user according to the changed voice position (S1912).
- The process from step S1908 to step S1912 may be applied to the other counterpart terminals in the same manner (S1914 to S1918), so that the voices of the various parties participating in the conference call may each be output as binaural sound.
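The per-participant rendering loop might be sketched with simple constant-power panning standing in for full binaural rendering; the function names and the panning model are illustrative assumptions:

```python
import math

def pan_gains(azimuth_deg):
    """Constant-power left/right gains for an azimuth in [-90, 90] degrees
    (-90 = hard left, +90 = hard right); a crude stand-in for the full
    binaural rendering step."""
    theta = math.radians((azimuth_deg + 90.0) / 2.0)  # map to [0, 90] deg
    return math.cos(theta), math.sin(theta)

def mix_conference(voices, azimuths):
    """Apply the change-position-then-render step to every counterpart's
    voice samples and sum the results into one stereo pair, mirroring how
    the process repeats for each counterpart terminal."""
    length = max(len(v) for v in voices)
    left, right = [0.0] * length, [0.0] * length
    for samples, az in zip(voices, azimuths):
        gl, gr = pan_gains(az)
        for i, s in enumerate(samples):
            left[i] += gl * s
            right[i] += gr * s
    return left, right

# One voice hard left, another hard right: each lands in its own channel.
left, right = mix_conference([[1.0, 1.0], [0.5, 0.5]], [-90.0, 90.0])
print([round(x, 6) for x in left], [round(x, 6) for x in right])  # [1.0, 1.0] [0.5, 0.5]
```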
- FIG. 20 is a detailed flowchart illustrating a process of activating or deactivating the emotional sound setting during a call in an emotional call method according to an embodiment of the present invention.
- Referring to FIG. 20, it may be determined whether the emotional call end condition is satisfied (S2015).
- the emotional call end condition may be satisfied when at least one of the following occurs: use of the earphone ends during the emotional call, or an emotional call end command based on a user input is entered during the emotional call.
- the call may be performed by deactivating the emotional sound setting and changing to the general call (S2020).
- the call process may be terminated.
- If the condition of step S2015 is not satisfied, it may be determined whether the call has ended (S2025).
- the call may be performed by activating the emotional sound setting again and changing to the emotional call (S2050).
- the emotional call execution condition may be satisfied when at least one of the following occurs: use of the earphone begins during the general call, or an emotional call execution command based on a user input is entered during the general call.
- After determining whether the call has ended (S2055), if the call has not ended, it may again be determined whether the emotional call end condition is satisfied in step S2015.
- the call process may be terminated.
- In step S2045, it may be determined whether the call has ended, corresponding to step S2035.
- FIG. 21 is a block diagram showing a call terminal running the emotional call application according to an embodiment of the present invention.
- a call terminal running an emotional call application includes a communication unit 2110, a processor 2120, and a memory 2130.
- the communication unit 2110 may receive the emotional call application through a communication network, or may connect a video call or a voice call with a counterpart call terminal. That is, the communication unit 2110 according to an embodiment of the present invention may receive the counterpart's voice data and transmit the received voice data to the processor 2120 or the memory 2130, which performs operations for the emotional call.
- the processor 2120 corresponds to a central processing unit, and may control the call terminal by executing the emotional call application according to an embodiment of the present invention, stored in the memory 2130 after being received through the communication unit 2110 or another path.
- the processor 2120 verifies the emotional sound setting.
- the emotional call according to the present invention may correspond to a call method for reducing the cognitive load that may occur when the user makes a call using the emotional sound.
- the emotional sound corresponds to the binaural sound to which the binaural effect is applied, and the sound actually output to the user may correspond to the emotional sound.
- an emotional call may reduce the cognitive load through a stereoscopic effect based on binaural sound, thereby reducing the user's fatigue during a call, or may provide the user with a three-dimensional call experience by adding an emotional element that cannot be felt in a general call.
- a service may be provided in consideration of the use of an audio device through which the binaural effect can be felt, such as an earphone or headphones.
- the emotional sound setting may be activated when the earphone is used, and deactivated when the earphone is not used.
- the earphone or headphones for activating the emotional sound setting may include not only wired devices but also wireless devices such as a Bluetooth headset or Bluetooth headphones.
- hereinafter, for convenience of description, activation of the emotional sound setting is described based on the use of the earphone.
- the activation or deactivation of the emotional sound setting may be performed according to the use of the headphones similar to the earphone.
- the relative position of the call terminal with respect to the face of the user performing the call can be checked, and the relative position can be confirmed as the voice position of the counterpart.
- the voice position of the counterpart may be confirmed by checking the relative position of the call terminal 220 with respect to the face of the user 210 performing the call. That is, the emotional sound setting may be performed so that the user 210 feels as if the counterpart's voice is heard at the location of the call terminal 220.
- based on a camera provided in the call terminal, the positional relationship of the eyes, nose, mouth, and ears included in the user's face may be detected, as shown in FIG. 3. The frontal direction of the face may be detected by determining, from the detected positional relationship, which direction the face of the user 210 faces, and an angle corresponding to the relative position may be calculated based on the detected frontal direction.
- referring to FIG. 4, the relative position angle 430 between the face front direction 411 of the user 410 and the front direction 421 of the call terminal 420 may be calculated.
- it may be assumed that the user 410 holds the call terminal 420 in the right hand during the call. At this time, if the user 410 speaks with the call terminal 420 held in the face front direction 411, there is no difference in angle between the terminal front direction 421 and the face front direction 411, so the emotional sound setting may be performed such that the counterpart's voice is heard from the front.
- even if the position of the call terminal 420 changes, the call terminal 420 may provide a stereoscopic call experience to the user 410 by performing the emotional sound setting in the same manner.
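the relative position angle 430 of FIG. 4 is simply the angle between two direction vectors. A minimal sketch, assuming the two directions (e.g. a face front direction estimated from the landmarks of FIG. 3, and the terminal front direction) are already available as 2-D vectors:

```python
import math

def relative_position_angle(face_front, terminal_front):
    """Angle in degrees between the user's face front direction (411)
    and the call terminal's front direction (421), i.e. angle 430 in
    FIG. 4. Directions are 2-D (x, y) vectors of any magnitude."""
    (ax, ay), (bx, by) = face_front, terminal_front
    cos_a = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    # clamp against floating-point drift before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
```

for example, identical directions give 0 degrees (voice heard from the front), while perpendicular directions give 90 degrees.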
- a face image of the user may be obtained based on a camera provided in the call terminal, and a distance corresponding to the relative position may be calculated in consideration of the image size of the face image.
- as shown in FIGS. 5 and 6, the face image 520 obtained through the camera of the call terminal in the call state 510 is larger than the face image 620 obtained through the camera of the call terminal in the call state 610. That is, because the face image 520 of FIG. 5 is larger than the face image 620 of FIG. 6, the relative position distance 511 of the call state 510 of FIG. 5 may be shorter than the relative position distance 611 of the call state 610 of FIG. 6.
- the emotional sound setting may be performed by calculating a distance corresponding to the relative position based on the image photographed by the call terminal in this manner.
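one hedged way to turn the image-size observation above into a number is the pinhole-camera relation, where apparent size is inversely proportional to distance; the reference values below are illustrative assumptions, not figures from the patent.

```python
def relative_distance(face_px_width, ref_px_width, ref_distance_cm):
    """Pinhole-camera sketch: apparent face size is inversely
    proportional to distance, so the larger face image 520 (FIG. 5)
    implies a shorter relative position distance 511 than the smaller
    face image 620 (FIG. 6) implies for distance 611.

    ref_px_width / ref_distance_cm form a one-time calibration pair."""
    return ref_distance_cm * ref_px_width / face_px_width
```

doubling the measured face width halves the estimated distance, matching the FIG. 5 vs FIG. 6 comparison.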
- the pre-setting corresponds to information that can be set according to who the call counterpart is, and a separate interface may be provided so that the user of the call terminal can configure the pre-setting based on the phone numbers stored in the call terminal.
- when the pre-setting is the whisper mode, a position corresponding to the right ear of the user may be confirmed as the voice position of the counterpart.
- for example, the user may pre-set the whisper mode for calls with a lover, so that the lover's voice position 730 is confirmed near the right ear of the user 710, as shown in FIG. 7. That is, when a pre-setting exists, the voice position may be determined according to the mode set in the pre-setting, regardless of the position of the call terminal 720.
- in this case, the counterpart's voice may be more easily recognized by setting the voice position to the right ear instead of the left ear.
- this setting takes into account the sound recognition path of the cerebrum, which is described below with reference to FIG. 8.
- referring to FIG. 8, sound entering the right ear 810 is transmitted via the nerve of the right cochlea 811 and the medial geniculate body of the thalamus to the primary auditory cortex of the left brain, and sound entering the left ear 820 is transmitted via the nerve of the left cochlea 821 and the medial geniculate body of the thalamus to the primary auditory cortex of the right brain. That is, as can be seen at the nerve crossing portion 800, the nerve of the right cochlea 811 is connected to the left brain and the nerve of the left cochlea 821 is connected to the right brain, so sound entering the right ear 810 is delivered to the left brain and sound entering the left ear 820 is delivered to the right brain.
- the Wernicke area, which corresponds to the listening region of the brain, is a core language center located in the temporal lobe of the left brain, and receives information from the primary auditory cortex.
- the primary auditory cortex processes only simple hearing without considering the meaning of language, and the Wernicke area subsequently processes what is heard as meaningful language. Consequently, for a voice to be understood as language, processing in the Wernicke area may be necessary.
- since sound entering the left ear 820 is first transmitted to the primary auditory cortex of the right brain, where the Wernicke area is not located, one more path is required compared with sound entering the right ear 810. That is, sound entering the right ear 810 is transmitted directly from the left-brain primary auditory cortex to the Wernicke area located in the left brain, whereas sound entering the left ear 820 must be transmitted from the right-brain primary auditory cortex across to the Wernicke area, so the path through the right ear 810 may be shorter than the path through the left ear 820.
- therefore, by providing sound to the right ear of the user 710 as shown in FIG. 7, the user 710 may recognize the voice more easily and with less fatigue.
- when the pre-setting is the nagging mode, a position corresponding to the back of the user's head may be confirmed as the voice position of the counterpart.
- the emotional sound setting may be deactivated to perform a general call instead of an emotional call.
- the designated position may be identified as the voice position of the counterpart in consideration of the number of counterparts participating in the conference call.
- the designated position may be set based on a value obtained by dividing an angle of a predetermined range, based on the line of sight of the user, by the number of counterparts.
- for example, the area may be divided by splitting 180 degrees in front of the user 1000 into two sectors corresponding to the number of counterparts, and the designated positions 1010 and 1020 may be set at the mid-angle points of the divided sectors to set the voice positions of the counterparts.
- the angle of the preset range is similarly 180 degrees.
- similarly, the area may be divided by splitting 180 degrees in front of the user 1100 into three sectors corresponding to the number of counterparts, and the designated positions 1110, 1120, and 1130 at the mid-angle points of the divided sectors may be set as the voice positions of the counterparts.
- alternatively, the area may be divided by splitting 360 degrees around the user 1200 into four sectors corresponding to the number of counterparts, and the designated positions 1210, 1220, 1230, and 1240 at the mid-angle points of the divided sectors may be set as the voice positions of the counterparts.
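the sector division described for FIGS. 10 to 12 can be sketched as follows; the function name and the sign convention (0 degrees straight ahead, negative to the left, positive to the right) are illustrative assumptions.

```python
def designated_positions(num_parties, span_deg=180.0):
    """Divide `span_deg` (centred on the user's line of sight) into
    `num_parties` equal sectors and place each counterpart's voice at
    the sector's mid-angle, as in FIGS. 10-12."""
    sector = span_deg / num_parties
    start = -span_deg / 2.0
    return [start + sector * (i + 0.5) for i in range(num_parties)]
```

with two counterparts this yields voices at -45 and +45 degrees; with three, at -60, 0, and +60 degrees; and with four counterparts over a full 360-degree span, at -135, -45, +45, and +135 degrees.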
- the method of setting the designated positions is not limited to the methods shown in FIGS. 10 to 12.
- the processor 2120 may provide a location designation interface capable of designating the voice position of each counterpart in consideration of the number of counterparts.
- the call terminal of the user may provide the location designation interface shown in FIG. 13. Assuming that there are participants A, B, C, and D in addition to the user, interface screens for selecting the location of each participant may be provided sequentially, as shown in FIGS. 13 to 15.
- a Modify Location button 1610 for modifying the designated locations of the participants and a Start Conference Call button 1620 for starting the conference call with the currently designated locations may be provided, as shown in FIG. 16.
- the location designation interface may be provided before the conference call starts or during the conference call, and its form is not limited to FIGS. 13 to 16.
- the processor 2120 changes the voice position of the counterpart according to the emotional sound setting, and provides the counterpart's voice as a binaural sound corresponding to the changed voice position.
- the binaural sound may be generated by applying existing technology or technology to be developed in the future.
- for example, Korean Patent Publication No. 10-1599554 discloses a method of outputting a 3D binaural signal based on MPEG Surround, an international standard multichannel audio coding technology. Specifically, it discloses extracting multichannel audio reproduction characteristic parameters based on the MPEG Surround (MPS) international standard, and outputting a binaural signal by performing HRTF (Head Related Transfer Function) filtering on a downmix audio signal using those parameters.
- here, the HRTF filtering may be filtering that obtains the impulse responses of the left and right sides for each position, at specific intervals over 360 degrees of azimuth and 180 degrees of elevation, using a dummy-head microphone that models the human auditory organs.
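once left/right impulse responses (HRIRs) have been measured per position as described above, rendering a voice at that position amounts to convolving the mono voice with the pair. A minimal sketch, assuming the HRIRs are given; this is not the patent's implementation.

```python
def binauralize(mono, hrir_left, hrir_right):
    """Render a mono voice signal at one (azimuth, elevation) position
    by convolving it with that position's left/right head-related
    impulse responses. Direct O(n*m) convolution for clarity; a real
    system would use block FFT convolution."""
    def conv(x, h):
        y = [0.0] * (len(x) + len(h) - 1)
        for i, xi in enumerate(x):
            for j, hj in enumerate(h):
                y[i + j] += xi * hj
        return y
    return conv(mono, hrir_left), conv(mono, hrir_right)
```

the left/right HRIRs differ in delay and level, which is what produces the interaural cues the listener perceives as direction.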
- in addition, the multichannel audio reproduction characteristic parameters may be extracted based on spatial parameters expressed by the output level difference between the front and rear channel signals for each frequency band and the degree of correlation between them.
- as another example, Korean Patent Publication No. 10-0971700 discloses filtering left/right audio signals in the frequency domain based on the location information of a virtual sound source and per-channel binaural filter coefficients, and decoding the filtered signal into a binaural stereo signal.
- specifically, an input time-domain stereo left/right audio signal is converted into a frequency-domain signal using a Discrete Fourier Transform (DFT) or a Fast Fourier Transform (FFT), and the frequency-domain stereo left/right signal may be filtered into a binaural stereo signal based on a per-subband, per-channel power gain value allocated according to the location information of the virtual sound source and a per-channel left/right HRTF coefficient block in the frequency domain.
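the frequency-domain step can be sketched as transform, multiply by HRTF coefficients scaled by a channel gain, and transform back. The naive O(n^2) DFT below stands in for the FFT mentioned in the text; the function names and the single-gain simplification are illustrative assumptions.

```python
import cmath

def dft(x, inverse=False):
    """Naive DFT/IDFT; a stand-in for the FFT mentioned in the text."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

def filter_in_frequency_domain(signal, hrtf_coeffs, gain=1.0):
    """Transform one channel to the frequency domain, apply per-bin
    HRTF coefficients scaled by a channel power gain, then transform
    back to the time domain."""
    spectrum = dft(signal)
    shaped = [gain * s * h for s, h in zip(spectrum, hrtf_coeffs)]
    return [v.real for v in dft(shaped, inverse=True)]
```

with all-ones coefficients and unit gain the round trip reproduces the input, which is a quick sanity check that the transform pair is normalized consistently.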
- in this case, the power gain of each channel may be calculated by synthesizing spatial cue information based on the virtual source location information (VSLI) of the virtual sound source, and the VSLI-based spatial cue information for the stereo signal may be expressed, for an arbitrary subband (m), as the Left Half-plane Angle (LHA(m)), the Left Subsequent Angle (LSA(m)), the Right Half-plane Angle (RHA(m)), and the Right Subsequent Angle (RSA(m)).
- the present invention may likewise generate the binaural sound corresponding to the voice position of the counterpart, determined by the emotional sound setting, based on the above techniques.
- in addition, when the emotional call termination condition is satisfied during the emotional call, the processor 2120 may deactivate the emotional sound setting.
- the emotional call termination condition may be satisfied when at least one of the following occurs: the use of the earphone ends during the emotional call, or an emotional call termination command based on a user input is entered during the emotional call.
- the processor 2120 may activate the emotional sound setting when the emotional call execution condition is satisfied during the general call.
- the emotional call execution condition may be satisfied when at least one of the following occurs: the use of the earphone begins during the general call, or an emotional call execution command based on a user input is entered during the general call.
- the emotional call method may be applicable to at least one of a voice call and a video call. That is, the emotional call method according to the present invention can be applied to a voice call in which only a voice is heard, or to a video call showing an image together with the voice.
- the memory 2130 may store various applications including an emotional call application along with an operating system (OS) for the call terminal. Therefore, the emotional call application may correspond to a computer program installed and executed in the mobile terminal.
- the memory 2130 may support a function for performing an emotional call according to an embodiment of the present invention.
- the memory 2130 may operate as a separate mass storage, and may include a control function for performing operations.
- the memory is a computer readable medium.
- the memory may be a volatile memory unit, and for other implementations, the memory may be a nonvolatile memory unit.
- the memory may include, for example, a hard disk device, an optical disk device, or some other mass storage device.
- according to the present invention, the user's fatigue during a call may be reduced, and a personalized call environment may be provided by identifying the call counterpart and setting the emotional sound accordingly.
- the method and apparatus for emotional calling using binaural sound according to the present invention are not limited to the configurations and methods of the embodiments described above; rather, all or some of the embodiments may be selectively combined so that various modifications can be made.
Claims (20)
- An emotional call method for reducing the cognitive load of a call, comprising: checking an emotional sound setting; and changing a voice position of a counterpart according to the emotional sound setting and providing the counterpart's voice as a binaural sound corresponding to the changed voice position.
- The method of claim 1, wherein the emotional sound setting is activated when an earphone is used and deactivated when the earphone is not used.
- The method of claim 1, wherein checking the emotional sound setting comprises checking a relative position of a call terminal with respect to the face of a user performing the call, and confirming the relative position as the voice position of the counterpart.
- The method of claim 3, wherein checking the emotional sound setting comprises detecting, based on a camera provided in the call terminal, a positional relationship of at least two of the eyes, nose, mouth, and ears included in the user's face, and calculating an angle corresponding to the relative position based on a face frontal direction detected in consideration of the positional relationship.
- The method of claim 1, wherein checking the emotional sound setting comprises checking, based on a phone number, whether a pre-setting exists for the counterpart, and, if a pre-setting exists, confirming the voice position of the counterpart according to the pre-setting.
- The method of claim 5, wherein checking the emotional sound setting comprises, when the pre-setting is a whisper mode, confirming a position corresponding to the right ear of the user as the voice position of the counterpart.
- The method of claim 5, wherein checking the emotional sound setting comprises, when the pre-setting is a nagging mode, confirming a position corresponding to the back of the user's head as the voice position of the counterpart.
- The method of claim 1, wherein checking the emotional sound setting comprises, when the call corresponds to a conference call, confirming a designated position as the voice position of the counterpart in consideration of the number of counterparts participating in the conference call.
- The method of claim 8, wherein the designated position is set based on a value obtained by dividing an angle of a predetermined range, based on the line of sight of the user, by the number of counterparts.
- The method of claim 8, further comprising providing a location designation interface capable of designating the voice position of the counterpart in consideration of the number of counterparts.
- The method of claim 1, further comprising: deactivating the emotional sound setting when an emotional call termination condition is satisfied during an emotional call; and activating the emotional sound setting when an emotional call execution condition is satisfied during a general call.
- The method of claim 11, wherein the emotional call termination condition is satisfied when at least one of the use of the earphone ending during the emotional call and an emotional call termination command based on a user input being entered during the emotional call occurs, and the emotional call execution condition is satisfied when at least one of the use of the earphone beginning during the general call and an emotional call execution command based on a user input being entered during the general call occurs.
- The method of claim 1, wherein the emotional call method is applicable to at least one of a voice call and a video call.
- An emotional call application stored in a computer-readable recording medium, the application executing, to reduce the cognitive load of a call: checking an emotional sound setting; and changing a voice position of a counterpart according to the emotional sound setting and providing the counterpart's voice as a binaural sound corresponding to the changed voice position.
- The application of claim 14, wherein the emotional sound setting is activated when an earphone is used and deactivated when the earphone is not used.
- The application of claim 14, wherein checking the emotional sound setting comprises checking a relative position of a call terminal with respect to the face of a user performing the call, and confirming the relative position as the voice position of the counterpart.
- The application of claim 16, wherein checking the emotional sound setting comprises detecting, based on a camera provided in the call terminal, a positional relationship of at least two of the eyes, nose, mouth, and ears included in the user's face, and calculating an angle corresponding to the relative position based on a face frontal direction detected in consideration of the positional relationship.
- The application of claim 14, wherein checking the emotional sound setting comprises checking, based on a phone number, whether a pre-setting exists for the counterpart, and, if a pre-setting exists, confirming the voice position of the counterpart according to the pre-setting.
- The application of claim 18, wherein, when the pre-setting is a whisper mode, a position corresponding to the right ear of the user is confirmed as the voice position of the counterpart, and, when the pre-setting is a nagging mode, a position corresponding to the back of the user's head is confirmed as the voice position of the counterpart.
- The application of claim 14, wherein checking the emotional sound setting comprises, when the call corresponds to a conference call, confirming a designated position as the voice position of the counterpart in consideration of the number of counterparts participating in the conference call.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2018-0055558 | 2018-05-15 | ||
KR1020180055558A KR102036010B1 (ko) | 2018-05-15 | 2018-05-15 | 바이노럴 사운드를 이용한 감성 통화 방법 및 이를 위한 장치 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019221506A1 true WO2019221506A1 (ko) | 2019-11-21 |
Family
ID=68420580
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2019/005823 WO2019221506A1 (ko) | 2018-05-15 | 2019-05-15 | 바이노럴 사운드를 이용한 감성 통화 방법 및 이를 위한 장치 |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102036010B1 (ko) |
WO (1) | WO2019221506A1 (ko) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006279492A (ja) * | 2005-03-29 | 2006-10-12 | Tsuken Denki Kogyo Kk | 電話会議システム |
JP2012529843A (ja) * | 2009-06-09 | 2012-11-22 | アンダーソン,ディーン・ロバート・ゲイリー | 補聴器の方向音響フィッティングのための方法と装置 |
KR101405646B1 (ko) * | 2010-06-29 | 2014-06-10 | 알까뗄 루슨트 | 휴대용 통신 디바이스 및 지향된 사운드 출력을 이용한 통신 가능화 |
US20160266865A1 (en) * | 2013-10-31 | 2016-09-15 | Dolby Laboratories Licensing Corporation | Binaural rendering for headphones using metadata processing |
KR20180038073A (ko) * | 2014-07-10 | 2018-04-13 | 와이덱스 에이/에스 | 적어도 하나의 보청기의 작동을 제어하기 위한 애플리케이션 소프트웨어를 갖는 개인 통신 디바이스 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100971700B1 (ko) | 2007-11-07 | 2010-07-22 | 한국전자통신연구원 | 공간큐 기반의 바이노럴 스테레오 합성 장치 및 그 방법과,그를 이용한 바이노럴 스테레오 복호화 장치 |
KR101599554B1 (ko) | 2009-03-23 | 2016-03-03 | 한국전자통신연구원 | Sac 부가정보를 이용한 3d 바이노럴 필터링 시스템 및 방법 |
2018
- 2018-05-15 KR KR1020180055558A patent/KR102036010B1/ko active IP Right Grant
2019
- 2019-05-15 WO PCT/KR2019/005823 patent/WO2019221506A1/ko active Application Filing
Also Published As
Publication number | Publication date |
---|---|
KR102036010B1 (ko) | 2019-10-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017204574A1 (en) | Wireless sound equipment | |
WO2020141824A2 (en) | Processing method of audio signal and electronic device supporting the same | |
US4008376A (en) | Loudspeaking teleconferencing circuit | |
WO2015147530A1 (ko) | 음향 신호의 렌더링 방법, 장치 및 컴퓨터 판독 가능한 기록 매체 | |
WO2012102464A1 (ko) | 이어마이크로폰 및 이어마이크로폰용 전압 제어 장치 | |
WO2010087630A2 (en) | A method and an apparatus for decoding an audio signal | |
CN111464905A (zh) | 基于智能穿戴设备的听力增强方法、系统和穿戴设备 | |
WO2017188648A1 (ko) | 이어셋 및 그 제어 방법 | |
WO2016089180A1 (ko) | 바이노럴 렌더링을 위한 오디오 신호 처리 장치 및 방법 | |
WO2015147619A1 (ko) | 음향 신호의 렌더링 방법, 장치 및 컴퓨터 판독 가능한 기록 매체 | |
WO2017043688A1 (ko) | 이어커낼 마이크가 내장된 블루투스 이어셋 및 이의 제어방법 | |
US9542957B2 (en) | Procedure and mechanism for controlling and using voice communication | |
WO2015199508A1 (ko) | 음향 신호의 렌더링 방법, 장치 및 컴퓨터 판독 가능한 기록 매체 | |
WO2021060680A1 (en) | Methods and systems for recording mixed audio signal and reproducing directional audio | |
WO2022080612A1 (ko) | 휴대용 음향기기 | |
US20200245056A1 (en) | Earphone having separate microphones for binaural recordings and for telephoning | |
WO2019221506A1 (ko) | 바이노럴 사운드를 이용한 감성 통화 방법 및 이를 위한 장치 | |
WO2021010562A1 (en) | Electronic apparatus and controlling method thereof | |
EP0033744A1 (en) | Voice controlled switching system | |
WO2020096406A1 (ko) | 사운드 생성 방법 및 이를 수행하는 장치들 | |
WO2019199040A1 (ko) | 메타데이터를 이용하는 오디오 신호 처리 방법 및 장치 | |
WO2020101358A2 (ko) | 이어셋을 이용한 서비스 제공방법 | |
WO2023080698A1 (ko) | 향상된 brir에 기초한 입체 음향 생성 방법 및 이를 이용한 어플리케이션 | |
WO2020040541A1 (ko) | 전자장치, 그 제어방법 및 기록매체 | |
WO2022197151A1 (ko) | 외부 소리를 듣기 위한 전자 장치 및 전자 장치의 동작 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19803365 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19803365 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21/05/2021) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19803365 Country of ref document: EP Kind code of ref document: A1 |