WO2022107174A1 - Method of computer-aided machine repair - Google Patents

Method of computer-aided machine repair

Info

Publication number
WO2022107174A1
WO2022107174A1 (PCT/IT2020/000077)
Authority
WO
WIPO (PCT)
Prior art keywords
machine
operator
computer server
voice recognition
information
Prior art date
Application number
PCT/IT2020/000077
Other languages
French (fr)
Inventor
Gabriele MARCHINA
Original Assignee
Marchina Gabriele
Priority date
Filing date
Publication date
Application filed by Marchina Gabriele filed Critical Marchina Gabriele
Priority to PCT/IT2020/000077 priority Critical patent/WO2022107174A1/en
Publication of WO2022107174A1 publication Critical patent/WO2022107174A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/20Administration of product repair or maintenance

Landscapes

  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A method of computer-aided machine repair includes causing an operator to provide machine malfunction information to a computer server using both a voice recognition system that receives a vocal description of the malfunction and augmented reality equipment that provides imagery depicting the machine malfunction, and further causing the operator to receive troubleshooting information from the computer server via one or both of the voice recognition system and the augmented reality equipment.

Description

METHOD OF COMPUTER-AIDED MACHINE REPAIR
FIELD OF THE INVENTION
[0001] The present invention relates to a method of computer-aided machine repair. More particularly, the present invention relates to a method of computer-aided machine repair, in which an operator provides machine malfunction information to a computer server using a voice recognition system and augmented reality equipment, and receives troubleshooting information via one or both of the voice recognition system and the augmented reality equipment.
BACKGROUND OF THE INVENTION
[0002] Industrial machines are subject to occasional malfunctions and breakdowns. When an industrial machine is not operating properly, or has stopped operating, troubleshooting is required, but the attending operator may be unable or may not be allowed to perform the required maintenance independently.
[0003] Maintenance personnel, however, may not be on staff in certain plants, or may not be available during certain time intervals, for example, during the night and on weekends. An industrial stoppage is highly undesirable because it causes a loss of production and, consequently, of revenue; moreover, because the output of the malfunctioning machine may provide the input to a downstream machine, the stoppage may extend to that downstream machine, and also to an upstream machine, whose output can no longer be used.
[0004] Even when maintenance personnel are available, the repair of the machine may require a level of knowledge beyond that of the available personnel, or the diagnostic time may be extended, which increases machine downtime.
[0005] Therefore, there is a need for a repair system and method that can be used by a machine operator at any time and that can provide troubleshooting information that can be applied without the presence of specialized personnel.
[0006] There is also a need for a maintenance system, which can be accessed quickly by maintenance personnel and with which the maintenance personnel can interact rapidly, receiving accurate troubleshooting information.
[0007] A computer-based maintenance and service management system for injection molding machines is disclosed in document WO 2004/070460 to Netstal-Maschinen AG. The maintenance system disclosed by WO '460 includes creating maintenance/service modules defined as functional units, which provide targeted, on-site support and maintenance forecasts based on the maintenance/service modules. When a malfunction occurs, the operator can consult troubleshooting information stored locally or remotely to perform the necessary repair work.
[0008] The system disclosed in WO '460, however, is based on accessing predefined troubleshooting information, without the ability for the operator to define the problem, and for the machine to understand the problem, with a high degree of accuracy by providing both vocal and visual descriptions of the problem. Further, in a system as described in WO '460, the operator cannot receive the necessary repair information with verbal and visual explanations tailored to the problem observed at a specific moment in time.
SUMMARY OF THE INVENTION
[0009] A method of computer-aided machine repair according to the invention is performed using a computer server that has a database stored within the computer server or readily accessible externally, for example when the database is stored in the cloud. Such database includes repair information on a specific type and model of a machine, or on a class of machines, or on different types of machines.
[0010] A machine operator is provided with augmented reality equipment connected to the computer server, which enables the operator to input information into the computer server in the form of imagery captured by, or in any case transmitted with, the augmented reality equipment and illustrating the machine malfunction. Such imagery may be in the form of static images or of a video.
[0011] The machine operator is further connected to the computer server through a voice recognition system that inputs machine malfunction information into the computer server using a vocal input by the operator.
[0012] Based on the received imagery and vocal inputs, the computer server determines the operator's intent and processes the machine malfunction information that has been provided by the voice recognition system and by the augmented reality equipment in order to develop troubleshooting information on the machine. Such troubleshooting information is provided to the operator by the computer server by means of one or both of the voice recognition system and the augmented reality equipment.
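By way of a minimal sketch (Python, with placeholder components — nlu, repair_db, responder — that are assumptions of this illustration rather than elements named in the application), the server-side flow just described could be organized as follows:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MalfunctionReport:
    transcript: str                 # text produced by the voice recognition system
    images: List[bytes]             # still frames or video chunks from the AR equipment
    machine_id: Optional[str] = None

def handle_report(report: MalfunctionReport, nlu, repair_db, responder):
    """Process a malfunction report and return troubleshooting information.

    nlu, repair_db and responder stand in for the voice recognition system,
    the repair database, and the voice/AR output channel, respectively.
    """
    intent = nlu.determine_intent(report.transcript)

    # Ask a clarifying question when the intent is incomplete,
    # for example when the machine type is missing.
    if intent.needs_clarification:
        return responder.ask(intent.clarification_question)

    # Retrieve troubleshooting entries that match the vocal and visual evidence.
    candidates = repair_db.search(intent, report.images)

    # Deliver the result through the voice channel, the AR channel, or both.
    return responder.deliver(candidates, channels=("voice", "ar"))
```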
[0013] In one embodiment of the invention, the voice recognition system is powered by artificial intelligence and may be a natural language understanding (NLU) voice recognition system that is trained for a specific domain to understand a vocal input related to a specific context and that is self-training to improve understanding of vocal inputs over time.
[0014] The voice recognition system may also include a text to speech system configured to transform text information received from the operator into speech, for example, when it is difficult for the machine operator to speak a predetermined language. In one embodiment, the voice recognition system may be configured to recognize voice commands received from a plurality of users through a world wide voice custom care expert (WWCCE) system.
[0015] In one embodiment, the voice recognition system includes a real-time translation system, which enables the voice recognition system to receive and provide voice messages in a desired language.
[0016] The voice recognition system may also be configured to provide a vocal response to the operator’s input that requests additional information from the operator and provides illustrative examples to the operator, so that the operator’s query may be understood by the computer server more clearly.
[0017] The augmented reality equipment may include smart glasses having a camera connected wirelessly to the computer server.
[0018] The imagery provided to the computer server using the augmented reality equipment consists typically of images of portions of the machine, such as machine parts, warning lights, error codes, or machine displays. In particular, the augmented reality equipment may provide imagery of a machine part of interest where repair is required or which needs to be accessed to perform repair operations, and may further provide identification information on such machine part of interest and receive the related troubleshooting information, which may be displayed as text, as a two-dimensional image, in holographic form, or in video form.
[0019] The augmented reality equipment may also be connected, for example, tethered, to an electronic device that is adapted to capture images and display troubleshooting information received from the computer server, such as a smartphone, a tablet, or a personal computer.
[0020] The machine operator may be further connected to the computer server via a dashboard that is in communication with one or both of the voice recognition system and the augmented reality equipment, so as to enable the operator to provide some or all of the machine malfunction information via the dashboard.
[0021] A method of computer-aided machine repair according to the invention may further include the step of having the computer server record and store, either internally or externally, a maintenance record of the machine, and provide the operator with maintenance guidelines for the machine based on machine type and history.
BRIEF DESCRIPTION OF THE DRAWINGS
[0022] The drawings constitute a part of this specification and include exemplary embodiments of the invention, which may be embodied in various forms. It is to be understood that in some instances various aspects of the invention may be shown exaggerated or enlarged to facilitate an understanding of the invention.
[0023] FIG. 1 illustrates the hardware components utilized in a method according to the invention and their respective interactions.
[0024] FIG. 2 illustrates the use of augmented reality equipment in a method according to the invention.
[0025] FIG. 3 summarizes a method according to the invention in diagram form.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
[0026] Detailed descriptions of embodiments of the invention are provided herein. It is to be understood, however, that the present invention may be embodied in various forms. Therefore, the specific details disclosed herein are not to be interpreted as limiting, but rather as a representative basis for teaching one skilled in the art how to employ the present invention in virtually any detailed system, structure, or manner.
[0027] FIG. 1 illustrates the basic elements for performing a method according to the invention. A computer server 10 is connected to a database 11, which may be stored in computer server 10, or, as shown in FIG. 1, may be stored in cloud 12. Database 11 includes repair information related to a machine 13 and may be organized for rapid search and retrieval.
[0028] In different embodiments of the invention, database 11 may include repair information on a specific machine; or on a group of machines, for example, on milling machines that are part of a product line of milling machines produced by a specific manufacturer; or on a specific type of machine, for example, milling machines produced by different manufacturers; or on different types of machines.
[0029] A person of skill in the art will appreciate that information on numerous models and types of machines may be included in database 11; therefore, the above example of a milling machine should not be understood as limiting but merely as exemplary of a range of possibilities.
[0030] Computer server 10 is further connected to a voice recognition system 14, which is configured to recognize human voices, for example, the voice of an operator 15, and convert that voice into information that can be processed by computer server 10. A voice recognition system, alternatively referred to as a speech recognition system, may be a computer software program or a hardware device that has the ability to decode the human voice and that may be used to operate a device, perform commands, or write without having to use a keyboard or a mouse, or to press any buttons.
[0031] In one embodiment, a voice recognition system used in a method according to the invention is a natural language understanding (NLU) system, which may be locally based or cloud based. The NLU system may be trained for a specific domain, in particular, to understand vocal inputs related to a specific context, and may be self-training to improve recognition of vocal input over time.
[0032] In one embodiment, the NLU system may be configured and trained to determine operator intent as expressed by the vocal input of operator 15; determine whether additional information must be requested from operator 15, for example, the type or the serial number of the machine when that information is not provided by the operator; and ask questions and provide one or more examples to operator 15 in order to better understand the intent of operator 15.
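A short sketch of the follow-up logic described in paragraph [0032]; the slot names and questions are hypothetical and only illustrate how the NLU system might decide what to ask next:

```python
from typing import Optional

# Pieces of information the system wants before it can act on the report.
REQUIRED_SLOTS = ("machine_type", "serial_number", "symptom")

CLARIFYING_QUESTIONS = {
    "machine_type": "Which machine are you working on? For example, 'the line 3 milling machine'.",
    "serial_number": "What is the serial number printed on the machine's rating plate?",
    "symptom": "What is the machine doing wrong? For example, 'the print edges are uneven'.",
}

def next_question(filled_slots: dict) -> Optional[str]:
    """Return the next clarifying question, or None when the intent is complete."""
    for slot in REQUIRED_SLOTS:
        if not filled_slots.get(slot):
            return CLARIFYING_QUESTIONS[slot]
    return None

# Example: the operator described the symptom but never named the machine.
print(next_question({"symptom": "uneven print edges"}))
```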
[0033] In different embodiments, different interfaces may be employed by operator 15 to communicate with voice recognition system 14. For example, voice recognition system 14 may be provided as software stored in computer server 10, with which operator 15 can communicate via a personal computer, a smartphone, or other external device.
[0034] In one embodiment of the invention, voice recognition system 14 is configured to recognize the voices of a multiplicity of users having different accents, or may be configured to recognize users speaking different languages, for example, for centralized maintenance support for machines installed in a plurality of countries. This may be achieved using a world wide voice custom care expert (WWCCE) system. In particular, voice recognition system 14 may include a real-time translation system, which enables voice recognition system 14 to receive and provide voice messages in a specific language.
[0035] Other voice recognition systems that may be employed are systems powered by artificial intelligence, such as Google Assistant or Amazon Alexa. In that case, voice recognition system 14 may be appropriately configured for the desired application using tools such as Google Action or Amazon Alexa Skill.
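As an illustration only, an Alexa Skill front end of the kind mentioned above could route a custom intent to the repair server roughly as follows (a sketch using the ASK SDK for Python; the intent name, slot name, and prompts are hypothetical and would be defined in the skill's interaction model):

```python
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

class ReportMalfunctionHandler(AbstractRequestHandler):
    """Handles a hypothetical 'ReportMalfunctionIntent' raised by the operator."""

    def can_handle(self, handler_input):
        return is_intent_name("ReportMalfunctionIntent")(handler_input)

    def handle(self, handler_input):
        slots = handler_input.request_envelope.request.intent.slots
        machine = slots["machineType"].value if slots and "machineType" in slots else None
        if machine is None:
            # Keep the session open and ask for the missing information.
            return (handler_input.response_builder
                    .speak("Which machine is malfunctioning?")
                    .ask("Please tell me the machine type or its serial number.")
                    .response)
        # Here the request would be forwarded to the repair server (placeholder).
        speech = f"Okay, looking up troubleshooting steps for the {machine}."
        return handler_input.response_builder.speak(speech).response

sb = SkillBuilder()
sb.add_request_handler(ReportMalfunctionHandler())
lambda_handler = sb.lambda_handler()   # entry point when hosted as an AWS Lambda function
```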
[0036] Further, in different embodiments, the software powering voice recognition system 14 may be stored in a dedicated piece of hardware that is part of or connected to computer server 10, or may be cloud-based.
[0037] Operator 15 is also provided with augmented reality (AR) equipment 16, which is configured to also communicate with computer server 10. An example of AR equipment 16 is smart glasses, which may be provided in different forms, for example, as glasses 17 having a camera 18 configured to communicate wirelessly with computer server 10.
[0038] In different embodiments, smart glasses 17 may be configured to simply provide imagery to computer server 10 or to also add a layer of interactive digital information on top of what is already seen in the physical world, for a purpose that will be discussed in greater detail later. Smart glasses 17 may also be tethered to an electronic device such as a smartphone to provide a display on that electronic device or receive an input through that electronic device. In one embodiment, the added image layer or the display on the tethered electronic device may be holographic, as will also be discussed later.
[0039] Examples of AR equipment include Google Glass, Microsoft HoloLens, and Toshiba dynaEdge, to mention but a few.
[0040] In a method according to the invention, operator 15 connects to computer server 10 both via voice, by providing a vocal input to voice recognition system 14, and via AR equipment 16, by inputting imagery into computer server 10 that illustrates the machine malfunction. That imagery may have been prerecorded, by capturing images representative of the machine malfunction, storing those images, and eventually transmitting those images to computer server 10; or may be live images captured while operator 15 is connected to computer server 10. Further, the imagery provided to computer server 10 may be in the form of still images or in video form.
[0041] Therefore, the transmission of voice and image information by operator 15 to computer server 10 may occur in different manners: by having operator 15 capture and store machine images in advance, and upload those images to computer server 10 when the voice input is also provided; or, otherwise, by having operator 15 capture images representative of the machine malfunction in real time and, at the same time, provide the voice input to voice recognition system 14 by using a portable device, such as a portable Amazon Alexa device or a smartphone connected to voice recognition system 14.
[0042] Upon receiving the voice-based and image-based inputs from operator 15, or immediately thereafter, computer server 10 processes the information provided by operator 15 to determine the intent of operator 15, and to develop troubleshooting information based on the received input. As described previously, computer server 10 may interact with operator 15 to better clarify operator 15's intent by asking clarifying questions and by providing targeted examples. The troubleshooting information provided by computer server 10 may be developed by having computer server 10 access database 11 and retrieve troubleshooting information from database 11, for example using standardized queries.
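A sketch of such a standardized query, assuming a hypothetical relational layout (a repair_info table with machine_model, symptom_keywords, and remedy columns) that is not specified in the application:

```python
import sqlite3
from typing import List, Tuple

def find_remedies(db_path: str, machine_model: str, keywords: List[str]) -> List[Tuple[str, str]]:
    """Return (symptom_keywords, remedy) rows matching the operator's description."""
    if not keywords:
        return []
    # One LIKE clause per keyword extracted from the vocal input.
    clauses = " OR ".join("symptom_keywords LIKE ?" for _ in keywords)
    params = [machine_model] + [f"%{kw}%" for kw in keywords]
    conn = sqlite3.connect(db_path)
    try:
        query = (
            "SELECT symptom_keywords, remedy FROM repair_info "
            f"WHERE machine_model = ? AND ({clauses})"
        )
        return conn.execute(query, params).fetchall()
    finally:
        conn.close()
```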
[0043] In one non-limiting example, the machine malfunction may be related to the improper deposition of conductive ink by an ink deposition machine for the production of printed electronics, where well defined, straight print edges are required and are instead not being produced.
[0044] To resolve that malfunction, operator 15 may provide a voice input to voice recognition system 14 that explains the problem of improper ink deposition and provides specific, detailed information on the problem. Alternatively, operator 15 may provide a text input (for example, when it may be problematic for operator 15 to communicate in a language recognized by voice recognition system 14) using a text to speech system, such as Google WaveNet or Amazon NTTS (Neural Text To Speech).
[0045] In addition to the voice input, operator 15 may also provide computer server 10 with a visual input using real time or prerecorded images that illustrate the problem, for example, the print edges of unacceptable quality and close-up images of the printing head. The visual input may also include other images, such as images of other machine parts, warning lights appearing on the machine that indicate an ongoing machine malfunction, error codes provided by the machine or by ad hoc equipment, machine displays, or even the potential development of a malfunction when tolerance ranges are being approached or exceeded.
[0046] In one embodiment of the invention, AR equipment 16 is also connected to a dashboard 19, so that the interaction between AR equipment 16 and computer server 10 may occur via dashboard 19.
[0047] In response to the received input or inputs, computer server 10 queries database 11. Such query may be in the form of a standard consultation seeking solutions to the identified problem through a word or word group search, in a manner similar to consulting a troubleshooting manual where standardized answers are provided for predetermined problems.
[0048] In one embodiment of the invention, computer server 10 may also cross-reference among different portions of database 11 using artificial intelligence to develop solutions more closely related to the problem presented by operator 15 than the standardized solutions provided in the manner of chapters of a troubleshooting manual.
[0049] In the above described example, computer server 10 may analyze the received images and cross-reference among possible solutions listed in database 11 to determine whether the problem of poorly defined, uneven print edges is due to an incorrect movement of the printing head, to a clogging of the printing head that causes an uneven ink output, or to an improper density of the ink that causes the ink to spread undesirably, so as to provide a response to operator 15 which is more specifically tailored to the situation at hand.
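One possible way to score such cross-referencing is sketched below; the symptom labels and the mapping from causes to symptoms are illustrative assumptions, not part of the application:

```python
# Candidate causes for the uneven-print-edge example, each with the symptoms
# that would support it (labels are illustrative only).
CANDIDATE_CAUSES = {
    "incorrect print-head movement": {"wavy_edges", "positional_drift"},
    "clogged print head": {"ink_gaps", "uneven_flow"},
    "improper ink density": {"ink_spreading", "blurred_edges"},
}

def rank_causes(observed_symptoms: set) -> list:
    """Rank candidate causes by the fraction of their expected symptoms observed."""
    scores = []
    for cause, expected in CANDIDATE_CAUSES.items():
        score = len(expected & observed_symptoms) / len(expected)
        scores.append((cause, score))
    return sorted(scores, key=lambda item: item[1], reverse=True)

# Example: image analysis reports spreading ink and blurred edges,
# so the improper-ink-density hypothesis ranks first.
print(rank_causes({"ink_spreading", "blurred_edges"}))
```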
[0050] The response of computer server 10 to operator 15 may be provided in various manners. In one embodiment, computer server 10 provides a vocal response, by which computer server 10 provides troubleshooting information or instructions to operator 15 via an artificial voice, using voice recognition system 14. In this case too, the artificial voice may be provided with a text to speech system, such as Google WaveNet or Amazon NTTS.
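As a sketch of the text-to-speech step, a WaveNet voice can be requested from the Google Cloud Text-to-Speech client library along the following lines (credentials and error handling omitted; the voice name is one of the service's published WaveNet voices):

```python
from google.cloud import texttospeech

def synthesize_reply(text: str, out_path: str = "reply.mp3") -> None:
    """Convert a troubleshooting reply into speech using a WaveNet voice."""
    client = texttospeech.TextToSpeechClient()
    synthesis_input = texttospeech.SynthesisInput(text=text)
    voice = texttospeech.VoiceSelectionParams(
        language_code="en-US",
        name="en-US-Wavenet-D",  # one of the WaveNet voices offered by the service
    )
    audio_config = texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    )
    response = client.synthesize_speech(
        input=synthesis_input, voice=voice, audio_config=audio_config
    )
    with open(out_path, "wb") as f:
        f.write(response.audio_content)  # audio to be played back to operator 15
```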
[0051] In a different embodiment, the response may be provided by computer server 10 through AR equipment 16, for example, by causing digital imagery to appear or to be overlapped as a layer over the real image seen in AR equipment 16. The digital imagery may include a display of a part of interest of machine 13, and provide identification of the type of machine part and troubleshooting information on that machine part. The digital imagery may be in two-dimensional form, in the form of a digital image, possibly with written or other information overlapped over the digital image; in holographic form; or in video form.
[0052] Still referring to the preceding example, the digital imagery may display the ink deposition head, identify the ink deposition head by part number, and show repair or fine-tuning information by pointing to specific portions of the ink deposition head or by providing a sequence of images. For example, the provided images may display a state of the ink deposition head that the operator should achieve or show how the ink deposition head should be cleaned using a series of images.
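A sketch of the kind of structured overlay payload the computer server could push to AR equipment 16 for this example; all field names, the part number, and the URL are hypothetical:

```python
import json
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class AROverlay:
    """Troubleshooting overlay sent to the smart glasses or a tethered device."""
    part_name: str
    part_number: str
    display_form: str                       # "2d", "holographic", or "video"
    instructions: List[str] = field(default_factory=list)
    media_url: str = ""                     # optional image or video resource

overlay = AROverlay(
    part_name="ink deposition head",
    part_number="IDH-1234",                 # illustrative part number
    display_form="holographic",
    instructions=[
        "Remove the head cover.",
        "Flush the nozzles with cleaning solvent.",
        "Re-run the edge-quality test print.",
    ],
    media_url="https://example.com/videos/head-cleaning.mp4",
)

print(json.dumps(asdict(overlay), indent=2))  # payload transmitted to AR equipment 16
```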
[0053] In different embodiments, the troubleshooting information in visual form may be displayed on a device connected to AR equipment 16, for example, a smartphone, a tablet, a personal computer, or on dashboard 19.
[0054] In still another embodiment, computer server 10 may provide the troubleshooting information both via voice recognition system 14 and AR equipment 16.
[0055] A method according to the invention may include the additional step of having computer server 10 record and store, either internally or externally (for example, in cloud 12), a maintenance record of machine 13, and provide operator 15 with a maintenance schedule and maintenance guidelines for machine 13. Among other things, the maintenance guidelines may include observing and testing certain parts of machine 13 which have a history of problems or which computer server 10 may forecast to be prone to developing problems, on the basis of both the history of machine 13 and the histories of other machines of the same kind stored in database 11.
[0056] For example, upon occurrence of a certain type of problem in machine 13 serviced by computer server 10, computer server 10 may update the records and maintenance schedules of machine 13, and alert the operators of all other machines of the same kind in order to prevent the same problem from occurring in those other machines. Computer server 10 may also update the records and maintenance schedules of all those other machines.
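A sketch of how such a maintenance history could be stored and mined to flag problem-prone parts and to identify which other machines to alert; the record layout and the failure-count threshold are assumptions made for illustration:

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date
from typing import List, Set

@dataclass
class MaintenanceEvent:
    machine_id: str
    part: str
    problem: str
    event_date: date

def problem_prone_parts(history: List[MaintenanceEvent], threshold: int = 3) -> List[str]:
    """Return parts whose failure count across the fleet meets the threshold."""
    counts = Counter(event.part for event in history)
    return [part for part, n in counts.items() if n >= threshold]

def machines_to_alert(history: List[MaintenanceEvent], failed_machine: str, part: str) -> Set[str]:
    """Identify other machines whose operators should be alerted about the same part."""
    return {e.machine_id for e in history
            if e.part == part and e.machine_id != failed_machine}
```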
[0057] While the invention has been described in connection with the above described embodiments, it is not intended to limit the scope of the invention to the particular forms set forth, but on the contrary, it is intended to cover such alternatives, modifications, and equivalents as may be included within the scope of the invention. Further, the scope of the present invention fully encompasses other embodiments that may become obvious to those skilled in the art and the scope of the present invention is limited only by the appended claims.

Claims

CLAIMS

1. A method of computer-aided machine repair comprising:
providing a computer server (10) having a database (11) stored therein or connected thereto, the database comprising repair information on a machine (13);
providing an operator with augmented reality equipment (16) configured to be connected to the computer server;
connecting the operator to the computer server (10) via a voice recognition system (14) that inputs machine malfunction information into the computer server using a vocal input by the operator;
connecting the augmented reality equipment (16) with the computer server (10) and inputting images into the computer server, the images being provided by the augmented reality equipment and illustrating the machine malfunction;
causing the computer server to determine operator intent and to elaborate the machine malfunction information provided by the voice recognition system and by the images, so as to develop troubleshooting information of the machine; and
causing the computer server to provide the troubleshooting information to the operator via one or both of the voice recognition system or the augmented reality equipment.

2. The method according to claim 1, wherein the repair information is specific to a predetermined machine or class of machines.

3. The method according to claim 1, wherein the augmented reality equipment comprises smart glasses (17) having a camera (18) connected wirelessly to the computer server.

4. The method according to claim 1, wherein the voice recognition system is powered with artificial intelligence and is configured to provide vocal responses to the operator.

5. The method according to claim 1, wherein the voice recognition system is a natural language understanding (NLU) voice recognition system, the NLU voice recognition system being trained for a specific domain to understand the vocal input related to a specific context and further being self-training to improve understanding of the vocal input over time.

6. The method according to claim 1, wherein the voice recognition system is further configured to provide a vocal response to the operator, the vocal response comprising requesting additional information from the operator and providing the troubleshooting information to the operator.

7. The method according to claim 1, wherein the images provided to the computer server comprise images of portions of the machine, the portions of the machine comprising one or more of machine parts, warning lights, error codes, or machine displays.

8. The method according to claim 1, wherein the augmented reality equipment is further connected to an electronic device configured to display the troubleshooting information, the electronic device comprising a smartphone, a tablet, or a personal computer.

9. The method according to claim 1, wherein the augmented reality equipment is further configured to provide a display of a machine part of interest and provide identification information of the machine part of interest and the troubleshooting information specific to the machine part of interest.

10. The method according to claim 9, wherein the machine part is displayed in holographic form or in video form.

11. The method according to claim 1, wherein the voice recognition system comprises a text to speech system configured to transform text information received from the computer server into speech.

12. The method according to claim 1, wherein the voice recognition system is configured to recognize voice commands received from a plurality of users through a world wide voice custom care expert (WWCCE) system.

13. The method according to claim 1, wherein the operator is further connected to the computer server via a dashboard, so as to enable the operator to provide some or all of the machine malfunction information via the dashboard.

14. The method according to claim 1, further comprising the step of causing the computer server to record and store, either internally or externally, a maintenance record of the machine, and to provide the operator with maintenance guidelines for the machine.
PCT/IT2020/000077 2020-11-17 2020-11-17 Method of computer-aided machine repair WO2022107174A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IT2020/000077 WO2022107174A1 (en) 2020-11-17 2020-11-17 Method of computer-aided machine repair

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IT2020/000077 WO2022107174A1 (en) 2020-11-17 2020-11-17 Method of computer-aided machine repair

Publications (1)

Publication Number Publication Date
WO2022107174A1 (en)

Family

ID=73856247

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IT2020/000077 WO2022107174A1 (en) 2020-11-17 2020-11-17 Method of computer-aided machine repair

Country Status (1)

Country Link
WO (1) WO2022107174A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004070460A1 (en) 2003-01-06 2004-08-19 Optogone Device for controlling polarization of a light signal using a polymer-stabilized liquid crystal material
CN107861611A (en) * 2016-09-22 2018-03-30 天津康途科技有限公司 A kind of Elevator maintenance system based on augmented reality
KR101990284B1 (en) * 2018-12-13 2019-06-18 주식회사 버넥트 Intelligent cognitive technology based augmented reality system using speech recognition

Similar Documents

Publication Publication Date Title
US11238228B2 (en) Training systems for pseudo labeling natural language
US20220043983A1 (en) OMNICHANNEL DATA COMMUNICATIONS SYSTEM USING ARTIFICIAL INTELLIGENCE (Al) BASED MACHINE LEARNING AND PREDICTIVE ANALYSIS
AU2021226758B2 (en) Intent analysis for call center response generation
CN109859829B (en) Medical equipment repair system based on WeChat public platform and operation method thereof
WO2019164815A1 (en) Interactive digital twin
US20200211699A1 (en) Imaging modality maintenance smart dispatch systems and methods
US20060256953A1 (en) Method and system for improving workforce performance in a contact center
US11120798B2 (en) Voice interface system for facilitating anonymized team feedback for a team health monitor
DE102019134312A1 (en) SYSTEMS, METHODS AND DEVICES FOR IMPROVING PROCESS CONTROL WITH A VIRTUAL ASSISTANT
US20200210966A1 (en) Imaging Modality Maintenance Care Package Systems and Methods
US11226801B2 (en) System and methods for voice controlled automated computer code deployment
US20220058589A1 (en) Methods, systems and computer program products for management of work shift handover reports in industrial plants
US10846342B2 (en) Artificial intelligence-assisted information technology data management and natural language playbook system
US20210248525A1 (en) Method and system for managing a technical installation
CN111402071B (en) Intelligent customer service robot system and equipment for insurance industry
US11438283B1 (en) Intelligent conversational systems
KR102073069B1 (en) Pc as management system
CA3203601A1 (en) Systems and methods for application accessibility testing with assistive learning
CA3021724C (en) Cloud-based system and method to track and manage objects
WO2022107174A1 (en) Method of computer-aided machine repair
WO2013116461A1 (en) Systems and methods for voice-guided operations
US20230350398A1 (en) Natural input processing for machine diagnostics
US20240020291A1 (en) System and method for data quality framework and structure
US20230409293A1 (en) System and method for developing an artificial specific intelligence (asi) interface for a specific software
US20210398624A1 (en) Systems and methods for automated intake of patient data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20828355

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20828355

Country of ref document: EP

Kind code of ref document: A1