WO2020147925A1 - System for visualizing a noise source in an environment of a user, and method - Google Patents

System for visualizing a noise source in an environment of a user, and method

Info

Publication number
WO2020147925A1
WO2020147925A1 (PCT/EP2019/050880)
Authority
WO
WIPO (PCT)
Prior art keywords
noise source
noise
user
information
audio signal
Prior art date
Application number
PCT/EP2019/050880
Other languages
German (de)
English (en)
Inventor
Martin Richard NEUHÄUSSER
Gabor SCHULZ
Original Assignee
Siemens Aktiengesellschaft
Priority date
Filing date
Publication date
Application filed by Siemens Aktiengesellschaft
Priority to PCT/EP2019/050880
Publication of WO2020147925A1

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/24Speech recognition using non-acoustical features
    • G10L15/25Speech recognition using non-acoustical features using position of the lips, movement of the lips or face analysis

Definitions

  • the present invention relates to a system for visualizing a noise source in a user's environment. Furthermore, the present invention relates to a method for visualizing a noise source in an environment of a user.
  • From the prior art, for example, gas detectors, fire detectors and/or smoke detectors are known.
  • augmented reality devices are known from the prior art, with which additional information can be provided to the user.
  • information is often represented visually by overlaying the surroundings, or captured image or video data, with additional information.
  • augmented reality devices are known with which objects in the vicinity can also be detected. Attempts have been made here in which, using augmented reality glasses, space and object recognition is carried out in real time and corresponding tones are output for this purpose. This enables blind people, for example, to orient themselves in a building. This is described, for example, in the article "Augmented Reality Powers a Cognitive Prosthesis for the Blind” by Yang Liu, Noelle R. B. Stiles and Markus Meister, 2018.
  • a system serves to visualize a noise source in the surroundings of a user.
  • the system comprises a detection device for providing an audio signal which describes a noise emitted by the noise source.
  • the system comprises a computing device for recognizing the noise and the noise source on the basis of the audio signal and for determining noise source information which describes the noise and/or the noise source.
  • the system comprises an augmented reality device that can be worn by the user for fading in the noise source information.
  • the system can be used to display sources of noise in the area to the user.
  • the system is particularly suitable for visualizing the noises for the user if the user cannot perceive the noises or only with difficulty. This is the case, for example, if there is a high level of noise in the area surrounding the user and / or the user is wearing appropriate hearing protection.
  • the system can be used in particular in an industrial environment, for example in production or assembly.
  • the system comprises the augmented reality device, which can be designed in particular as so-called augmented reality glasses. This augmented reality device can preferably be worn on the head by the user. This augmented reality device can be designed to display information in the user's field of vision.
  • the system also includes the detection device.
  • This acquisition device can comprise at least one microphone with which the noise of the noise source in the environment can be acquired.
  • the audio signal can be output, which describes the noise of the noise source.
  • the system also includes the computing device by means of which the audio signal can be received by the detection device. This audio signal can be evaluated accordingly by means of the computing device in order to recognize the noise.
  • the computing device is designed in particular to also recognize, on the basis of the audio data, the noise source from which the noise originates. With the computing device, the noise source information can then be provided, which describes the noise and/or the noise source.
  • This noise source information can then be transmitted from the computing device to the augmented reality device.
  • the augmented reality device can then show the noise source information to the user.
  • In particular, the noise source information can thus be displayed in the field of vision of the user or wearer of the augmented reality device.
  • even in the case of loud ambient noises and/or when wearing hearing protection, the user can be provided with the information that there is a noise source in the vicinity which causes a noise. Because this noise source information is faded into the field of view in particular, the user can be informed of the noise or the noise source regardless of his viewing direction and/or his position.
  • the computing device is preferably designed to determine the noise source information based on the audio signal based on artificial intelligence and / or by comparing the audio signal with stored reference audio signals.
  • the computing device can have an external computer network or a cloud, by means of which the noise source can be classified based on the audio signal.
  • the audio signal or parts thereof can be transmitted to this external computer network.
  • the audio signal can then be analyzed there using artificial intelligence.
  • a corresponding program can be run on the external computer network.
  • the audio signal can be compared with reference audio signals previously stored. These reference audio signals can be stored in a memory of the external computer network. For these reference audio signals, respective noise sources or classes of noise sources can be stored.
  • the computing device can have an embedded computing unit which is connected to the external computing network or the cloud for data transmission.
  • This embedded computing unit can, for example, be arranged in the augmented reality device.
  • the embedded computer unit can be part of the user's mobile device, for example a smartphone.
  • the audio signal can be transmitted from the detection device or the microphone to the embedded computer unit.
  • the audio signal or parts thereof can then be transmitted from the embedded computer unit to the external computer network.
  • the noise source information determined with the external computer network can then be transmitted from the external computer network to the embedded computer unit.
  • the noise source information can then be transmitted from the embedded computer unit to the augmented reality device and the noise source information can be shown to the user with this. Overall, a reliable detection of the noise and the noise source can thus be made possible.
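As a minimal illustration of the reference-comparison approach described above, the following sketch reduces an audio frame to a coarse spectral signature and matches it against stored reference signatures. This is an assumption-laden example, not the patented implementation: the feature choice, the labels and the distance threshold are all invented for the sketch.

```python
import numpy as np

def spectral_signature(audio: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Reduce an audio frame to a coarse, normalized magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(audio))
    # Pool the fine spectrum into n_bins coarse bands so that small
    # frequency shifts do not change the signature much.
    bands = np.array_split(spectrum, n_bins)
    signature = np.array([band.mean() for band in bands])
    norm = np.linalg.norm(signature)
    return signature / norm if norm > 0 else signature

def classify_noise(audio: np.ndarray, references: dict, threshold: float = 0.6):
    """Return the label of the closest stored reference signature, or None
    when no reference is close enough (an unknown noise source)."""
    signature = spectral_signature(audio)
    best_label, best_distance = None, np.inf
    for label, reference in references.items():
        distance = float(np.linalg.norm(signature - reference))
        if distance < best_distance:
            best_label, best_distance = label, distance
    return best_label if best_distance < threshold else None
```

In the architecture described above, `spectral_signature` could run on the embedded computing unit and the comparison in the external computer network; returning None corresponds to reporting an unknown noise source to the user.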
  • the computing device is designed to output, as the noise source information, information which states that there is an unknown noise source in the vicinity if the computing device does not recognize the noise source. If, for example, the computing device cannot identify the associated noise or the associated noise source on the basis of the audio signal, this can be indicated to the user accordingly by means of the augmented reality device. For example, it can be indicated or displayed as the noise source information that there is an unknown noise source in the area. The user can thus be informed of the presence of an unknown noise source.
  • the computing device is preferably designed to classify the unknown noise source as a function of an input by the user.
  • the user can make a corresponding input or operator input, by means of which he can identify or classify the noise source in more detail.
  • This input can take place, for example, by means of a voice input, an operator input, an operator gesture or the like.
  • the computing device is designed to receive input from the user if the noise source has been incorrectly identified or classified. This information for operator input or for classification can then be stored accordingly by the computing device or by the external computer network.
  • a self-learning system can be provided, which can quickly improve in operation and adapt to the respective application situation.
  • the computing device is designed to determine a volume of the noise on the basis of the audio signal and to determine the noise source information in such a way that it describes the volume.
  • the volume of the noise can be determined by means of the computing device.
  • the volume describes in particular a sound pressure level of the noise.
  • the noise source information can then be provided accordingly.
  • an additional indicator can be shown that describes the volume of the noise source. Additional information about the volume can thus be provided to the user.
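A minimal sketch of such a volume determination from the audio signal: the RMS of a frame is converted to decibels, and a hypothetical calibration offset maps the digital level to an absolute sound pressure level. The offset value is an assumption that would in practice come from microphone calibration.

```python
import numpy as np

def sound_level_db(samples: np.ndarray, calibration_offset_db: float = 94.0) -> float:
    """Estimate the sound level of an audio frame in decibels.

    20*log10(rms) is the level relative to digital full scale;
    calibration_offset_db is a hypothetical microphone calibration
    constant that maps full scale to a sound pressure level.
    """
    rms = float(np.sqrt(np.mean(np.square(samples))))
    if rms == 0.0:
        return float("-inf")
    return 20.0 * np.log10(rms) + calibration_offset_db
```

A full-scale sine has an RMS of 1/√2, i.e. about 3 dB below full scale, and halving the amplitude lowers the reported level by about 6 dB; such a value could drive the volume indicator shown to the user.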
  • the computing device is designed to use the audio data to determine a relative location between the user and the noise source and to determine the noise source information in such a way that it describes the relative location.
  • the detection device comprises at least one microphone.
  • the detection device has a plurality of microphones. These microphones can be arranged on the augmented reality device, for example. It can also be provided that the microphones are arranged in the surrounding area or in a room. On the basis of the respective data recorded with the microphones, the relative position between the noise source and the respective microphones can be determined. If the microphones are arranged separately from the augmented reality device, the relative position between the augmented reality device or the user and the microphones can also be determined.
  • the relative position between the user and the noise source can then be determined from this.
  • the direction from which the noise is emitted can then be determined on the basis of the relative position.
  • the distance between the user and the noise source can be determined.
  • the noise source information can also describe the direction or the distance to the noise source.
  • a corresponding indicator can be shown to the user, which describes the direction from which the noise is coming or the position of the noise source. Additional information on the noise source can thus be provided to the user.
  • the user can be informed, for example, of a noise source that is outside of the field of vision.
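With two of the several microphones, the direction described above can be estimated from the time difference of arrival (TDOA). The following sketch uses cross-correlation and a far-field approximation; the sign convention (positive angles toward the right microphone) and all parameter values are assumptions for the example, not taken from the patent.

```python
import numpy as np

def direction_from_tdoa(sig_left: np.ndarray, sig_right: np.ndarray,
                        mic_distance_m: float, sample_rate: float,
                        speed_of_sound: float = 343.0) -> float:
    """Estimate the bearing of a noise source in degrees (0 = straight ahead)
    from the time difference of arrival between two microphones."""
    # Cross-correlate the two channels; the peak position gives the delay
    # of the left signal relative to the right signal, in samples.
    correlation = np.correlate(sig_left, sig_right, mode="full")
    lag = int(np.argmax(correlation)) - (len(sig_right) - 1)
    tdoa = lag / sample_rate  # > 0: sound reached the right microphone first
    # Far-field approximation: tdoa = (d / c) * sin(angle)
    sin_angle = np.clip(tdoa * speed_of_sound / mic_distance_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_angle)))
```

With more than two microphones, several pairwise estimates of this kind could be combined to obtain both direction and distance to the noise source.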
  • the computing device is designed to determine a relevance of the noise on the basis of the audio signal.
  • the noise source can first be identified or classified. In this way, it can be determined, for example, whether the noise from the noise source is highly relevant to the working environment. For example, it can be determined whether the noise is an alarm signal. Alternatively, it can be determined whether the noise is an irrelevant noise, for example a noise from a coffee machine or the like. This information as to whether the noise is relevant to the user can also be stored in a memory. It can also be provided here that the user specifies or adjusts the relevance of the noise sources by means of an appropriate input. It can also be provided that, by an input of the user, the noise source information is not output for a predetermined noise source.
  • the computing device can be designed to determine a current operating state of the noise source on the basis of the audio signal.
  • the noise source information can be determined so that it describes the operating state. For example, it can be recognized from the audio signal that a machine that emits the noise squeaks. This can be displayed to the user, for example, as text using the augmented reality device. Provision can also be made for an instruction to be determined on the basis of the recognized operating state and to be issued to the user. For example, the user can be instructed to oil the squeaky machine. In this way, the operating status of machines or other devices in the area can be recognized and, depending on the operating status, appropriate actions can be taken.
  • the system is designed to check for the presence of a predetermined noise in the environment and to output information to the user if the predetermined noise is not present. If a noise that is normally present is not present in a work environment or work scenario, this can be displayed to the user. This sound can come from a device or machine that is in continuous operation. If this noise is no longer present, this can indicate a failure or defect of this machine. This can be displayed to the user accordingly.
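Checking for the absence of an expected noise — for example a machine in continuous operation falling silent — can be sketched as monitoring the energy in the machine's characteristic frequency band. The band limits and the threshold below are assumptions chosen for the example.

```python
import numpy as np

def expected_noise_missing(audio: np.ndarray, sample_rate: float,
                           band_hz: tuple, min_band_level: float) -> bool:
    """Return True when the characteristic spectral band of a machine that
    should be running continuously is absent from the current audio frame."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    lo, hi = band_hz
    band = spectrum[(freqs >= lo) & (freqs <= hi)]
    return float(band.mean()) < min_band_level
```

If this check returns True for several consecutive frames, the system could fade in a warning that the machine may have failed or be defective.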
  • the system is designed to recognize the noise source in the environment and the augmented reality device is designed to emphasize the recognized noise source by providing the noise source information.
  • the augmented reality device can have a corresponding camera and / or sensors, by means of which objects in the environment can be recognized.
  • Corresponding image data can be provided with the camera, and the objects can be recognized with the aid of a corresponding object recognition algorithm. If directional information or the relative position between the user and the noise source is also available on the basis of the data of the microphones, the noise source can be recognized in the surroundings or in the field of vision.
  • the data of the classification of the noise source can be used here.
  • the recognized noise source can then be highlighted accordingly by the noise source information. For this purpose, for example, a corresponding symbol or a rectangle that surrounds the noise source can be shown.
  • the user can be made aware of the noise source in an intuitive manner.
  • the previously described object recognition can also be used.
  • Using the object recognition, which is carried out, for example, with the camera, and any available information regarding the relative position between the noise source and the user, the unknown noise source can be identified in the field of view and displayed to the user.
  • the user can recognize the unknown noise source and classify it by making an appropriate entry.
  • the system is designed to recognize a lip movement of a person and to display voice information corresponding to the lip movement.
  • the lip movement of a person can also be detected with a corresponding camera or other optical sensors.
  • a corresponding program can be run on the computing device, which can output appropriate speech information on the basis of the lip movement.
  • This speech information describes the speech uttered by the person during the lip movement.
  • This speech information, or a corresponding text describing it, can be shown to the user by means of the augmented reality device. This enables the user to talk to the person even when there is loud ambient noise and/or when wearing hearing protection.
  • a method according to the invention serves to visualize a noise source in the surroundings of a user.
  • the method comprises providing an audio signal, which describes a noise emitted by the noise source, by means of a detection device.
  • the method includes recognizing the noise and the noise source on the basis of the audio signal and determining noise source information, which describes the noise and/or the noise source, by means of a computing device.
  • the method comprises fading in the noise source information by means of an augmented reality device that can be worn by the user.
  • FIG. 1 shows a schematic representation of a system for visualizing a noise source in the surroundings of a user;
  • FIG. 2 shows a field of view of an augmented reality device in which noise source information is displayed;
  • FIG. 3 shows noise source information displayed according to a further embodiment;
  • FIG. 4 shows noise source information displayed according to another embodiment.
  • FIG. 1 shows a system 1 for visualizing a noise source 2 in a schematic representation.
  • the system 1 comprises a detection device 3, by means of which an audio signal can be provided.
  • the detection device 3 comprises at least one microphone 4.
  • the detection device 3 can be used to provide the audio signal which describes a noise of the noise source 2 in an environment 5 of a user 6.
  • the system 1 comprises a computing device 7, which is used to recognize the noise and the noise source 2 on the basis of the audio signal.
  • the computing device 7 comprises an embedded computing unit 8.
  • the embedded computing unit 8 is part of a mobile terminal 9 of the user 6.
  • the mobile terminal 9 can be formed, for example, by a smartphone or the like.
  • the audio signal provided by the detection device 3 can be received by means of the embedded computer unit 8.
  • the computing device 7 comprises an external computer network 10 or a cloud.
  • the audio signal received by the embedded computer unit 8 can be transmitted from the embedded computer unit 8 to the external computer network 10.
  • the audio signal can be compared with previously determined reference audio signals.
  • These reference audio signals are assigned to predetermined noise sources 2 or classes of noise sources.
  • a corresponding program can be executed on the external computer network 10, by means of which the noise source 2 is recognized by artificial intelligence based on the audio signal.
  • a noise source information 15 can then be provided which describes the noise source 2 and / or the noise emitted by it.
  • This noise source information 15 can then be transmitted from the computing device 7 to an augmented reality device 11.
  • the user 6 can wear this augmented reality device 11 on his head 12.
  • the augmented reality device 11 can in particular be augmented reality glasses or data glasses.
  • objects 14 include a table, a drill and a coffee maker.
  • the augmented reality device 11 can furthermore have a camera and / or a corresponding optical sensor with which the objects 14 can be detected in the field of view 13 or in the environment 5. With the aid of a corresponding object detection algorithm, the objects 14 can then be recognized or classified.
  • the detection device 3 has a plurality of microphones 4, so that the direction from which the noise reaches the user 6 or the relative position between the user 6 and the noise source 2 can also be recognized.
  • the noise source information 15 is faded in.
  • the drilling machine was recognized as the noise source 2 on the basis of the audio signal.
  • the noise source 2 or the drill was identified in the field of view 13.
  • the noise source information 15 comprises a frame 16 superimposed on the noise source 2, by means of which the noise source 2 is emphasized.
  • a volume of the noise emitted by the noise source 2 is determined by the computing device 7 on the basis of the audio signal.
  • the noise source information 15 also includes a volume indicator 17, which indicates the volume of the noise.
  • a current operating state of the noise source 2 is determined by means of the computing device 7 on the basis of the audio signal or the noise of the noise source 2. In the present example, it is recognized from the audio signal that the drilling machine or the noise source 2 squeaks. Furthermore, an instruction for action is determined by the computing device 7 as a function of the recognized operating state and output to the user 6. The current operating state and/or the instruction for action can be reproduced in a text block 18 of the noise source information 15 (not shown here).
  • the noise source information 15 likewise includes a corresponding volume indicator 17.
  • the noise source 2 could not be identified by means of the computing device 7.
  • the noise source 2 is highlighted for the user 6 by the frame 16.
  • the volume of the sound of the noise source 2 is indicated by the volume indicator 17.
  • the user 6 can make an appropriate entry in order to classify the noise source 2. This can be done, for example, in the form of a corresponding voice input.
  • the user 6 can, for example, indicate that it is a tea kettle that emits a corresponding whistling sound.
  • This information can then be stored in a memory of the computing device 7 or the external computer network 10. In this way, the functionality of the system 1 can be continuously expanded or improved during operation.


Abstract

The invention relates to a system (1) for visualizing a noise source (2) in an environment (5) of a user (6). The system comprises: a detection device (3) for providing an audio signal which describes a noise emitted by the noise source (2); a computing device (7) for recognizing the noise and the noise source (2) on the basis of the audio signal and for determining noise source information (15) which describes the noise and/or the noise source (2); and an augmented reality device (11) which can be worn by the user and is designed to display the noise source information (15).
PCT/EP2019/050880 2019-01-15 2019-01-15 System for visualizing a noise source in an environment of a user, and method WO2020147925A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2019/050880 WO2020147925A1 (fr) 2019-01-15 2019-01-15 System for visualizing a noise source in an environment of a user, and method


Publications (1)

Publication Number Publication Date
WO2020147925A1 (fr) 2020-07-23

Family

ID=65236988

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2019/050880 WO2020147925A1 (fr) 2019-01-15 2019-01-15 System for visualizing a noise source in an environment of a user, and method

Country Status (1)

Country Link
WO (1) WO2020147925A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5359695A (en) * 1984-01-30 1994-10-25 Canon Kabushiki Kaisha Speech perception apparatus
US20110071830A1 (en) * 2009-09-22 2011-03-24 Hyundai Motor Company Combined lip reading and voice recognition multimodal interface system
US20140172432A1 (en) * 2012-12-18 2014-06-19 Seiko Epson Corporation Display device, head-mount type display device, method of controlling display device, and method of controlling head-mount type display device
US20150036856A1 (en) * 2013-07-31 2015-02-05 Starkey Laboratories, Inc. Integration of hearing aids with smart glasses to improve intelligibility in noise
EP3220372A1 (fr) * 2014-11-12 2017-09-20 Fujitsu Limited Dispositif vestimentaire, procédé et programme de commande d'affichage
EP3343957A1 (fr) * 2016-12-30 2018-07-04 Nokia Technologies Oy Contenu multimédia
DE112016005648T5 (de) * 2015-12-11 2018-08-30 Sony Corporation Datenverarbeitungsvorrichtung, datenverarbeitungsverfahren und programm


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG LIU; NOELLE R. B. STILES; MARKUS MEISTER: "Augmented Reality Powers a Cognitive Prosthesis for the Blind", 2018


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19702005

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19702005

Country of ref document: EP

Kind code of ref document: A1