EP3528244B1 - Image processing device, method for controlling the image processing device, and program - Google Patents

Image processing device, method for controlling the image processing device, and program

Info

Publication number: EP3528244B1
Application number: EP19158045.5A
Authority: EP (European Patent Office)
Prior art keywords: screen, voice, search, command group, command
Legal status: Active
Other languages: English (en), French (fr)
Other versions: EP3528244A1 (de)
Inventor: Hozuma Nakajima
Current Assignee: Konica Minolta Inc
Original Assignee: Konica Minolta Inc
Application filed by Konica Minolta Inc
Publication of application: EP3528244A1
Publication of grant: EP3528244B1

Classifications

    • G06F3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06V30/153 - Segmentation of character regions using recognition of characters or words
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/26 - Speech to text systems
    • H04N1/00397 - Input means: switches, knobs or the like
    • H04N1/00403 - Voice input means, e.g. voice commands
    • H04N1/00405 - Output means
    • H04N1/00411 - Display of information to the user, the display also being used for user input, e.g. touch screen
    • H04N1/00413 - Display of information using menus, i.e. presenting the user with a plurality of selectable options
    • H04N1/00474 - Output means outputting a plurality of functional options, e.g. scan, copy or print
    • H04N1/00482 - Output means outputting a plurality of job set-up options, e.g. number of copies, paper size or resolution
    • H04N1/00514 - Personalising a user interface for a particular user or group of users, for individual users
    • H04N1/00915 - Assigning priority to, or interrupting, a particular operation
    • H04N1/0097 - Storage of instructions or parameters, e.g. customised instructions or different parameters for different user IDs
    • G10L2015/223 - Execution procedure of a spoken command
    • G10L2015/226 - Procedures using non-speech characteristics
    • G10L2015/228 - Procedures using non-speech characteristics of application context

Definitions

  • the present invention relates to an image processing device such as a multi-functional peripheral (MFP), and a technology related thereto.
  • In JP 2011-049705 A, processing of searching for voice recognition data pertaining to a result of recognizing voice data is executed within a search range that is a keyword group (voice operation command group) predetermined corresponding to a screen of a current layer.
  • In some cases, a screen of a current layer is displayed together with another screen serving as its caller (also referred to as a screen serving as a caller); in other words, the screen that has been most recently called is displayed so as to be superimposed on the screen that called it.
  • In such a case, the search range of the voice recognition data is only the keyword group (voice operation command group) predetermined corresponding to the screen of the current layer. Therefore, an instruction for an operation button in a screen other than the screen of the current layer (the screen serving as the caller) cannot be given.
  • In short, the search range of the voice recognition data is always fixed to the voice operation command group related to one of the two operation screens (here, the current layer screen). Therefore, a voice operation command related to the other operation screen cannot be detected.
  • In view of this, an object of the present invention is to provide a technology that makes it possible to properly detect one voice operation command corresponding to a user's voice input from among a plurality of voice operation commands related to a plurality of operation screens.
  • the invention provides an image processing device in accordance with independent claim 1.
  • the invention provides a method in accordance with independent claim 14.
  • the invention provides a program in accordance with independent claim 15. Further aspects of the invention are set forth in the dependent claims, the drawings and the following description.
  • Fig. 1 is a front view illustrating an external appearance of an image processing device according to a first embodiment of the present invention.
  • an MFP 10 is presented as the image processing device.
  • Fig. 2 is a diagram illustrating functional blocks of the multi-functional peripheral (MFP) 10.
  • the MFP 10 is a device (also referred to as a "complex machine") that is provided with a scanning function, a copy function, a facsimile function, a box storing function, and the like. Specifically, as shown in the functional block diagram of Fig. 2, the MFP 10 is provided with an image reading part 2, a print output part 3, a communication part 4, a storage part 5, an operation part 6, a controller 9, and the like. The MFP 10 realizes various functions by causing these parts to operate in combination. It should be noted that the MFP 10 is also expressed as an image forming device or the like.
  • the image reading part 2 is a processing part that optically reads (that is to say, scans) an original document placed at a predetermined position of the MFP 10, and generates image data of the original document (also referred to as "original document image” or "scanned image”).
  • This image reading part 2 is also referred to as a scanning part.
  • the print output part 3 is an output part that prints out an image to various media such as paper on the basis of data related to a print target.
  • This MFP 10 also serves as an electrophotographic printer (full-color printer).
  • the print output part 3 includes various hardware mechanisms (also referred to as "image forming mechanism” or “printout mechanism”) such as an exposure part, a development part, a transfer part, and a fixing part.
  • the communication part 4 is a processing part that is capable of performing facsimile communication through a public line or the like. Moreover, the communication part 4 is also capable of performing network communication through a network. This network communication uses various protocols such as, for example, Transmission Control Protocol/Internet Protocol (TCP/IP). Using the network communication enables the MFP 10 to give and receive various data to/from a desired destination.
  • the communication part 4 includes: a transmission part 4a that transmits various data; and a receiving part 4b that receives various data.
  • the storage part 5 is formed by a storage device such as a hard disk drive (HDD).
  • the operation part 6 is provided with: an operation input part 6a that accepts operation input for the MFP 10; and a display part 6b that performs display output of various kinds of information.
  • This MFP 10 is provided with a substantially plate-like operation panel part 40 (refer to Fig. 1 ).
  • the operation panel part 40 has a touch panel 45 (refer to Fig. 1 ) on the front side thereof.
  • the touch panel (operation panel) 45 is formed by embedding a piezoelectric sensor or the like in a liquid crystal display panel.
  • the touch panel 45 is capable of displaying various kinds of information, and is capable of accepting operation input from an operator (operation input by operator's fingers). For example, various screens (including button images) such as a menu screen are displayed on the touch panel 45.
  • buttons expressed by the button images are also referred to as "software buttons" or "software keys".
  • the touch panel 45 functions as a part of the operation input part 6a, and also functions as a part of the display part 6b.
  • the operation panel part 40 is also provided with hardware keys (hardware buttons) 41 to 44, and 46.
  • the controller (control part) 9 is a control device that is built into the MFP 10, and that controls the MFP 10 in a unified manner.
  • the controller 9 is formed as a computer system that is provided with a CPU, various semiconductor memories (a RAM and a ROM), and the like.
  • the controller 9 realizes various processing parts by executing, in the CPU, a predetermined software program (also referred to simply as a "program"; in detail, a program module group) stored in a ROM (for example, an EEPROM (registered trademark)).
  • the program may be downloaded via a network or the like so as to be installed in the MFP 10.
  • the controller 9 executes the program to realize various processing parts including a communication control part 11, an input control part 12, a display control part 13, a voice recognition processing part 14, an obtaining part 15, a determination part 16, a search part 17, and a command execution part 18.
  • the communication control part 11 is a processing part that controls communication operation with other devices.
  • the input control part 12 is a control part that controls operation input to the operation input part 6a.
  • the input control part 12 controls operation of accepting operation input into an operation screen (also referred to as "operation screen area").
  • the display control part 13 is a processing part that controls display operation in the display part 6b.
  • the display control part 13 causes the display part 6b to display, for example, an operation screen for operating the MFP 10.
  • the voice recognition processing part 14 is a processing part that executes voice recognition processing related to a voice (voice input) vocalized by a user. It should be noted that the voice recognition processing part 14 operates as a part of the operation part 6.
  • the obtaining part 15 is a processing part that obtains voice recognition data (text data) that is a voice recognition result related to the voice input.
  • the obtaining part 15 obtains a result of the voice recognition or the like by the voice recognition processing part 14.
  • the determination part 16 is a processing part that determines a search target character string on the basis of voice recognition data.
  • the search part 17 is a processing part that executes search processing of searching for one voice operation command (text data) that agrees with the search target character string from among a plurality of voice operation commands.
  • the command execution part 18 is a processing part that executes processing (various setting processing and/or job execution processing, and the like) according to the one voice operation command searched for by the search part 17.
  • the search part 17 gives a priority order to each of a plurality of command groups (including a first command group M1 and a second command group M2) selected from among a plurality of voice operation commands, and executes search processing in which the search range is each command group, according to the priority order given to the corresponding command group.
  • the search processing is executed, for example, in two stages. Specifically, first of all, the search part 17 executes first search processing in which a search range is a first command group M1 (also referred to as "first operation command group") that has been narrowed down from among a plurality of voice operation commands (for example, a plurality of voice operation commands related to a plurality of operation screens that are currently being displayed) according to a predetermined criterion. After that, in a case where the search target character string is not detected by the first search processing in which the search range is the first command group M1, the search part 17 executes second search processing in which a search range is a second command group M2 (also referred to as "second operation command group”) selected from among the plurality of voice operation commands.
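The two-stage search described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function names and the use of exact string agreement are assumptions for the sketch.

```python
def search_in_group(target, command_group):
    """Return the command in the group that agrees with the search target
    character string, or None if no command agrees."""
    for command in command_group:
        if command == target:
            return command
    return None


def two_stage_search(target, first_group_m1, second_group_m2):
    """First search processing over the narrowed-down first command group M1;
    only when it fails to detect the target, second search processing over
    the second command group M2 is executed."""
    hit = search_in_group(target, first_group_m1)
    if hit is not None:
        return hit
    return search_in_group(target, second_group_m2)
```

With M1 taken from the most recently called screen and M2 from its caller, a target found in M1 is never looked up in M2, which is the efficiency point made in the description.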
  • the first command group M1 is also expressed as a voice operation command group to which the first search priority order has been given; and the second command group M2 is also expressed as a voice operation command group to which the second search priority order has been given.
  • the plurality of voice operation commands related to an operation screen (210 and the like) that is currently being displayed can include not only commands corresponding to operation by software keys (keys displayed on the touch panel 45) in the operation screen, but also commands (start job execution/stop job execution and the like) corresponding to operation by hardware keys (a start key/a stop key and the like).
  • a voice operation command group related to each screen is set (registered) beforehand by being associated with that screen.
  • a plurality of voice operation commands such as those shown in Fig. 8 are registered beforehand as the voice operation command group 610 related to the basic menu screen 210 (refer to Fig. 5), and a plurality of voice operation commands such as those shown in Fig. 9 are registered beforehand as the voice operation command group 630 related to a magnification ratio setting screen 230 (refer to Fig. 7).
  • a command dictionary (text dictionary) in which the voice operation command groups 610, 630, and the like are registered is stored beforehand in the storage part 5.
  • a plurality of voice operation commands including "GENKO GASHITSU (original-document image quality)", “KARA (color)”, “NODO (density)”, “YOSHI (paper)”, “BAIRITSU (magnification ratio)”, “RYOMEN/PEJI SYUYAKU (double-sided/page aggregation)", “SHIAGARI (finish)”, “MOJI SYASHIN (character photo)”, “OTO KARA (auto color)”, “FUTSU (ordinary)”, “JIDO (automatic)”, and “HYAKU PASENTO (100%)” are registered.
  • Each voice operation command is expressed as text data indicating each operation instruction.
  • voice operation commands are registered by being associated with operation keys 211 to 217 and the like (also refer to Fig. 5 ) related to the basic menu screen 210.
  • the voice operation commands "GENKO GASHITSU (original-document image quality)" and "MOJI SYASHIN (character photo)" are each associated with an operation key "original-document image quality" (software key 211) in the "copy basic" screen (the basic menu screen 210) (in detail, the "base screen area" group of the basic menu screen 210).
  • voice operation commands "YOSHI (paper)” and “JIDO (automatic)” are each associated with an operation key “paper” (software key 214) in the "copy basic” screen (the basic menu screen 210) (in detail, the "base screen area” group of the basic menu screen 210).
  • the voice operation commands "BAIRITSU (magnification ratio)” and “HYAKU PASENTO (100%)” are associated with an operation key “magnification ratio” (software key 215) in the "copy basic” screen (the basic menu screen 210) (in detail, the "base screen area” group of the basic menu screen 210).
  • each of the other voice operation commands is also associated with any of the other operation keys (212, 213, 216, 217, and the like).
  • a plurality of voice operation commands including "JIDO (automatic)", “CHISAME (smallish)”, “PURASU (plus)”, “MAINASU (minus)”, “GOJYU PASENTO (50%)", “NANAJYUTTEN NANA PASENTO (70.7%)”, “HACHIJYUICHITEN ROKU PASENTO (81.6%)”, “HACHIJYUROKUTEN ROKU PASENTO (86.6%)”, and "HYAKU PASENTO/TOUBAI (100%/non-magnified)" are registered.
  • voice operation commands are registered by being associated with, for example, operation keys 231 to 237 and 241 to 249 (software keys) related to the magnification ratio setting screen 230.
  • the voice operation command "JIDO (automatic)" is associated with an operation key “automatic” (the software key 231) in the "copy magnification ratio” screen (the magnification ratio setting screen 230) (in detail, the "base screen area” group of the magnification ratio setting screen 230).
  • each of the other voice operation commands is also associated with any of the operation keys 232 to 237, 241 to 249, and the like.
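The per-screen registration described above could be represented as a simple nested dictionary mapping spoken command text to the ID of its operation key. The screen names and key IDs below are illustrative assumptions, not the patent's actual identifiers; only a few of the registered commands are shown.

```python
# Hypothetical command dictionary: for each screen, each voice operation
# command (text data) is registered by being associated with an operation key.
COMMAND_DICTIONARY = {
    "basic_menu_210": {
        "GENKO GASHITSU (original-document image quality)": "key_211",
        "YOSHI (paper)": "key_214",
        "JIDO (automatic)": "key_214",
        "BAIRITSU (magnification ratio)": "key_215",
        "HYAKU PASENTO (100%)": "key_215",
    },
    "magnification_ratio_230": {
        "JIDO (automatic)": "key_231",
        "GOJYU PASENTO (50%)": "key_241",
    },
}


def command_group_for(screen_id):
    """Return the voice operation command group registered for one screen."""
    return list(COMMAND_DICTIONARY[screen_id])
```

Note that the same utterance ("JIDO (automatic)") can legitimately appear in both screens' groups while being associated with different keys, which is exactly why the search order between groups matters.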
  • the MFP 10 executes operation similar to that executed when the operation key corresponding to each voice operation command is pressed. Specifically, the MFP 10 (spuriously) generates the same operation event (internal event) as that generated when that operation key is pressed, and thereby realizes operation similar to that executed when the operation key is pressed.
  • each voice operation command may be registered by being associated with an ID of the operation key (refer to Fig. 10 ), or may be registered by being associated with position information of the operation key (for example, coordinate values, in a screen, of a representative position of the operation key) (refer to Fig. 11 ).
  • each operation key may be identified by an ID (identifier) that has been given thereto beforehand, or may be identified by coordinate values in the screen.
  • the MFP 10 may execute processing that is identified by the ID of the operation key (for example, application programming interface (API) execution processing associated with the ID of the operation key), or may execute processing assuming that pressing operation for representative coordinate values of the operation key has been given.
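The two execution variants just mentioned (dispatch by key ID versus a spurious press at the key's representative coordinates) might be sketched as below. The registration tuple format and callback names are assumptions for illustration; the callbacks stand in for the MFP's internal event generation.

```python
def execute_for_command(registration, press_by_id, press_at):
    """Dispatch the operation for one detected voice operation command.

    `registration` is either ("id", key_id) or ("pos", (x, y)), mirroring
    the two registration styles: API-style processing identified by the
    operation key's ID, or a press event at representative coordinates.
    """
    kind, value = registration
    if kind == "id":
        return press_by_id(value)
    if kind == "pos":
        x, y = value
        return press_at(x, y)
    raise ValueError("unknown registration style: %r" % (kind,))
```

Either way, the voice path converges on the same internal event handling as a physical or touch press, so no separate command-execution logic is needed per key.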
  • the voice operation commands related to the basic menu screen 210 may include not only the voice operation commands registered by being associated with the software keys (software buttons) displayed on the basic menu screen 210, but also voice operation commands registered by being associated with hardware keys (hardware buttons) (41, 42, 43, 44, 46) (refer to Fig. 1 ) provided on the operation panel part 40.
  • a voice operation command "SUTATO (start)” may be further registered by being associated with the start key (start button) 41
  • a voice operation command "SUTOPPU (stop)” may be further registered by being associated with the stop key (stop button) 42 (refer to Fig. 32 ).
  • a voice operation command "RISETTO (reset)” may be registered by being associated with the reset key (reset button) 43; and a voice operation command “HOMU (home)” may be registered by being associated with a home key (home button) 44.
  • a voice operation command "TENKI (numeric keypad)” may be registered by being associated with a numeric keypad call button 46.
  • voice operation commands related to the magnification ratio setting screen 230 may include those registered by being associated with the hardware keys (41, 42, 43, 44, 46, and the like) in a similar manner.
  • Fig. 3 is a conceptual diagram illustrating an outline of the operation according to the first embodiment
  • Fig. 4 is a flowchart illustrating the operation according to the first embodiment.
  • Fig. 5 is a diagram illustrating the basic menu screen 210 related to a copy job
  • Fig. 6 is a diagram illustrating a state in which the detail setting screen 230 (also referred to as "magnification ratio setting screen") related to the copy magnification ratio is displayed so as to be superimposed on the basic menu screen 210.
  • Fig. 7 is a diagram that extracts and illustrates only the magnification ratio setting screen 230.
  • the magnification ratio setting screen 230 is a detail setting screen that is displayed according to user's operation for the basic menu screen 210 (for example, pressing of a magnification-ratio setting button 215 in the basic menu screen 210). It should be noted that as shown in Figs. 5 and 6 , the plurality of operation keys 211 to 217 (software keys) are displayed in the basic menu screen 210, and the plurality of operation keys 231 to 237, 241 to 249 (software keys), and the like are displayed in the magnification ratio setting screen 230.
  • search processing (processing of searching for a user's voice recognition result) in which the search range is only the voice operation command group 610 is executed. Specifically, the user's voice recognition result is searched for from among the voice operation command group 610 that is registered by being associated with the basic menu screen 210.
  • a display state of the touch panel 45 changes to a state such as that shown in Fig. 6 .
  • the magnification ratio setting screen 230 (sub-screen) called from the basic menu screen 210 (main screen) is displayed on the touch panel 45 together with the basic menu screen 210.
  • both of the two operation screens 210 and 230 that differ in layer from each other are displayed on the touch panel 45.
  • each voice operation command (“JIDO (automatic)” and the like) related to the sub-screen (the magnification ratio setting screen 230) can be searched for
  • however, a voice operation command that is associated with a button in the main screen (the basic menu screen 210) (for example, the voice operation command "GENKO GASHITSU (original-document image quality)" that is associated with the button 211 for setting the original-document image quality in the basic menu screen 210) cannot be searched for.
  • in contrast, in the present embodiment, operation whose search target is the voice operation command groups related to both of the two operation screens 230 and 210 is executed.
  • first search processing in which a search range is a first command group M1 that has been narrowed down from among voice operation commands of the plurality of voice operation command groups 610 and 630 according to a predetermined criterion (described next) is executed (also refer to Fig. 3 ).
  • the predetermined criterion is whether or not a voice operation command group is related to the screen (logically, the screen of the lowest layer) that has been most recently (lastly) called, among at least one operation screen that is currently displayed (in detail, displayed at the time the voice for voice operation is vocalized).
  • the voice operation command group 630 related to the screen 230 that has been most recently called (most recently displayed) between the two operation screens 210 and 230 that are currently displayed is determined as the first command group M1.
  • the first priority order is given to the voice operation command group 630 (the first command group M1), and search processing for the first command group M1 (630) to which the first priority order has been given is first executed.
  • the operation screen 230 is the screen that is displayed frontmost of the two operation screens, and is also designated as the screen displayed as the highest layer.
  • the operation screen 230 is also designated as a first priority screen.
  • in a case where the one voice operation command "GOJYU PASENTO (50%)" is detected by the first search processing, processing corresponding to it is executed. Specifically, setting processing of setting the copy magnification ratio to "50%" (processing corresponding to operation of pressing the button 241 in the magnification ratio setting screen 230) is executed.
  • in a case where the search target character string is not detected by the first search processing, the voice operation command group 610 other than the first command group M1 of the two voice operation command groups 610 and 630 is determined as the second command group M2, and second search processing in which the search range (also referred to as "second search range") is the second command group M2 is executed.
  • the second priority order is given to the voice operation command group 610 (the second command group M2), and search processing for the second command group M2 (610) to which the second priority order has been given is executed.
  • since the operation screen 210 is a screen corresponding to the voice operation command group 610 to which the second priority order has been given, the operation screen 210 is also designated as a second priority screen.
  • in a case where the one voice operation command "GENKO GASHITSU (original-document image quality)" is detected by the second search processing, processing corresponding to it is executed. Specifically, processing corresponding to operation of pressing the button 211 in the basic menu screen 210 is executed. More specifically, processing of displaying a detail setting screen 220 (not illustrated) related to the original-document image quality so as to be superimposed on the basic menu screen 210 is executed.
  • such operation enables even a voice operation command (for example, "GENKO GASHITSU (original-document image quality)”) that agrees with any of voice operation commands of the voice operation command group 610 related to the operation screen (the operation screen serving as a caller) 210 other than the operation screen 230 that has been most recently called to be searched for. Therefore, one voice operation command corresponding to a user's voice for operation can be properly detected from among the plurality of voice operation commands related to the plurality of operation screens.
  • moreover, since the search processing related to the two operation screens 210 and 230 is performed in two stages, efficient search processing can be performed.
  • the first search processing in which a search range is the voice operation command group 630 related to the operation screen 230 that has been most recently called is first performed, and in a case where the first search processing does not succeed, the second search processing in which a search range is the voice operation command group 610 related to the other screen 210 is performed. Consequently, search processing in which a search range is the voice operation command group 630 having a higher possibility of being vocalized as a voice for operation, between the two voice operation command groups 610 and 630, is performed earlier, and subsequently, search processing in which a search range is the other voice operation command group 610 is performed. Therefore, efficient search processing can be performed.
  • As an alternative, a technique can be used in which a user's voice recognition result is searched all at once (without distinction of search range) across both the voice operation command group 610 that is registered in association with the basic menu screen 210 and the voice operation command group 630 that is registered in association with the magnification ratio setting screen 230.
  • one voice operation command (for example, the former) that does not agree with the user's intention may be detected first, with the result that processing corresponding to that voice operation command is executed.
  • the voice operation command may be misrecognized as an instruction given by pressing the button 214 in the layer screen (basic menu screen) 210 serving as a caller that has called the current layer screen.
  • the operation of searching for a user's voice recognition result not all at once but in two stages is executed.
  • the priority order is given, on a command group basis, to the plurality of voice operation commands that include the first command group M1 (630) and the second command group M2 (610), and search processing in which a search range is each command group is executed according to the priority order given to the corresponding command group.
  • search processing (first search processing) of the first order is first executed, and in a case where one voice operation command that agrees with a search target character string has not been detected by the first search processing, search processing (second search processing) of the second order is executed.
  • search processing of the first order is executed within a search range of the voice operation command group 630 related to the screen displayed as the highest layer.
  • the detection result in the first search processing is employed preferentially.
  • the search processing of the first order is executed within a search range of the command group 630 related to the screen of the logically lowest layer (the screen displayed as the highest layer) 230.
  • a user often performs voice operation related to an operation screen (here, the screen 230) that is currently displayed and that has been most recently called. Therefore, if search processing is executed with the first priority order given to the voice operation command group 630 related to the operation screen 230, there is a high possibility that a voice operation command that agrees with the user's intention will be detected. In turn, it is possible to properly execute voice operation that agrees with the user's intention.
  • Fig. 4 is a flowchart illustrating operation of the MFP 10.
  • When display contents on the touch panel 45 change during a standby state in the standby loop from step S11 to step S21, the process proceeds from step S11 to step S12.
  • the MFP 10 obtains not only the command group 610 (the voice operation command group related to the basic menu screen 210) that has already been obtained until the change, but also another command group 630 (the voice operation command group related to the magnification ratio setting screen 230) (step S12). Consequently, a command group 600 (601) that includes both of the voice operation command groups 610 and 630 is formed (refer to Figs. 8 and 9). It should be noted that, as shown in Figs. 8 and 9, each of the command groups 600, 610, and 630 is a set composed of, for example, text data (voice operation commands) related to voice operation.
  • In step S21, when user's voice input is accepted in a state in which the two operation screens 210 and 230 are displayed on the touch panel 45, the process proceeds from step S21 to step S22.
  • In step S22, the voice recognition processing part 14 of the MFP 10 executes voice recognition processing on the user's voice input.
  • the obtaining part 15 of the MFP 10 obtains voice recognition data (text data), which is a processing result of the voice recognition processing, from the voice recognition processing part 14.
  • a voice recognition result related to a user's voice (accepted voice input) that has been vocalized in a state in which the two operation screens 210 and 230 are both displayed on the touch panel 45 is obtained.
  • the MFP 10 determines a search target character string on the basis of voice recognition data.
  • a character string "GENKO GASHITSU (original-document image quality)" of the voice recognition data is determined as a search target character string without any change.
  • Alternatively, "GENKO GASHITSU (original-document image quality)" may be determined as the search target character string by excluding "ETO" (a character string registered beforehand as a word having no meaning) from the character string "ETO, GENKO GASHITSU (well, original-document image quality)" of the voice recognition data.
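The derivation of the search target character string from the voice recognition data could be sketched as below. This is an illustrative sketch only; the filler-word list and function name are assumptions, not contents of the embodiment.

```python
# Hypothetical list of character strings registered beforehand as words
# having no meaning (fillers); "ETO" is the example given in the text.
FILLER_WORDS = ("ETO", "ANO")

def to_search_target(recognized_text):
    """Remove pre-registered meaningless words from the voice recognition data."""
    parts = [p.strip() for p in recognized_text.split(",")]
    kept = [p for p in parts if p not in FILLER_WORDS]
    return ", ".join(kept)
```

With this sketch, "ETO, GENKO GASHITSU" yields the search target "GENKO GASHITSU", while input without fillers is returned unchanged.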
  • the voice operation command group 630 related to the magnification ratio setting screen 230 is determined (set) as the first command group M1 (the first search range).
  • Specifically, from the voice operation command group 600 (refer to Figs. 8 and 9), data records corresponding to the magnification ratio setting screen 230, each of which prescribes "copy magnification ratio" as the field value of the field "screen", are extracted (narrowed down) from among the plurality of data records (a data group in which the data of each row is one unit (data record)). Consequently, the voice operation command group 630 of Fig. 9 is extracted as the first command group M1.
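The narrowing-down by the "screen" field could be sketched as below. This is an illustrative sketch only; the record contents are assumptions loosely modeled on Figs. 8 and 9, not a reproduction of them.

```python
# Each data record pairs a voice operation command with the screen it
# belongs to; extraction keeps only records whose "screen" field matches.
command_group_600 = [
    {"screen": "copy basic", "command": "GENKO GASHITSU (original-document image quality)"},
    {"screen": "copy basic", "command": "BAIRITSU (magnification ratio)"},
    {"screen": "copy magnification ratio", "command": "JIDO (auto)"},
    {"screen": "copy magnification ratio", "command": "TOUBAI (full size)"},
]

def extract_by_field(records, field, value):
    """Extract (narrow down) the data records whose `field` equals `value`."""
    return [r for r in records if r[field] == value]

# First command group M1: records of the magnification ratio setting screen.
first_command_group_m1 = extract_by_field(
    command_group_600, "screen", "copy magnification ratio")
```

The same helper, called with "copy basic", would yield the second command group M2.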
  • In step S24, the first search processing of searching the first search range for the search target character string is executed.
  • In step S25, a determination is made as to whether or not one voice operation command that agrees with the search target character string has been detected in the first search processing.
  • In step S30, processing corresponding to the one voice operation command is executed.
  • In a case where it is determined in step S25 that one voice operation command that agrees with the search target character string has not been detected in the first search processing, the process proceeds from step S25 to step S26.
  • the voice operation command group 610 related to the basic menu screen 210 is determined (set) as the second command group M2 (the second search range).
  • Specifically, from the voice operation command group 600 (refer to Figs. 8 and 9), data records (data records corresponding to the basic menu screen 210), each of which prescribes "copy basic" as the field value of the field "screen", are extracted (narrowed down) from among the plurality of data records. Consequently, the voice operation command group 610 of Fig. 8 is extracted as the second command group M2. Alternatively, a remaining command group obtained by excluding the voice operation command group 630 from the voice operation command group 600 may be extracted as the second command group M2.
  • In step S24, the second search processing of searching the second search range for the search target character string is executed.
  • In the next step S25, a determination is made as to whether or not one voice operation command that agrees with the search target character string has been detected in the second search processing.
  • In step S30, processing corresponding to the one voice operation command is executed.
  • In a case where it is determined in step S25 that one voice operation command that agrees with the search target character string has not been detected in the second search processing, the process proceeds from step S25 to step S26.
  • In step S29, error processing (for example, processing of displaying a notification that a voice operation command corresponding to the input voice could not be found) is executed.
  • the operation according to the first embodiment is executed in this manner.
  • a change to the display state of Fig. 6 is made in response to the operation in which the magnification-ratio setting button 215 in the basic menu screen 210 has been pressed in the display state of Fig. 5 .
  • the present invention is not limited to the above.
  • a change to the display state of Fig. 6 may be made in response to the operation in which user's voice input "BAIRITSU (magnification ratio)" has been accepted in the display state of Fig. 5 .
  • In this case, the voice recognition data "BAIRITSU (magnification ratio)" has only to be searched for with only the voice operation command group 610 (Fig. 8) as a search target.
  • search processing having a larger number of stages (three stages or more) may be performed.
  • a numeric keypad screen 250 (sub-screen) called from the basic menu screen 210 (main screen) is displayed on the touch panel 45 together with the basic menu screen 210.
  • the numeric keypad screen 250 is displayed so as to be superimposed on a part of the basic menu screen 210 (the numeric keypad screen 250 is displayed on the most frontward side).
  • the MFP 10 obtains not only the command group 610 that has already been obtained until the change (the voice operation command group related to the basic menu screen 210), but also another command group 650 (the voice operation command group related to the numeric keypad screen 250) (refer to Fig. 13) (step S12). Consequently, the command group 600 (602) that includes both of the voice operation command groups 610 and 650 is formed (refer to Fig. 13).
  • the MFP 10 obtains voice recognition data, which is a processing result of voice recognition processing related to user's voice input, from the voice recognition processing part 14, and determines a search target character string (for example, "GO (5)") on the basis of the voice recognition data (steps S21, S22).
  • the voice operation command group 650 related to the numeric keypad screen 250 is determined as the first command group M1, and the voice operation command group 650 is set as a search range (the first search range) (step S23).
  • Specifically, from the voice operation command group 602 (refer to Fig. 13), data records (data records corresponding to the numeric keypad screen 250), each of which prescribes "numeric keypad" as the field value of the field "group in screen", are extracted (narrowed down) from among the plurality of data records. Consequently, as shown in Fig. 14, the voice operation command group 650 is extracted as the first command group M1. In other words, the voice operation command group 650 related to the numeric keypad screen 250 is set as the first search range.
  • In step S24, the first search processing of searching the first search range for the search target character string is executed.
  • processing corresponding to the one voice operation command is executed (steps S25, S30).
  • the voice operation command group 610 related to the basic menu screen 210 is determined as the second command group M2, and the voice operation command group 610 is set as a search range (the second search range) (step S23).
  • Specifically, from the voice operation command group 602 (refer to Fig. 13), data records (data records corresponding to the basic menu screen 210), each of which prescribes "base screen area" as the field value of the field "group in screen", are extracted (narrowed down) from among the plurality of data records. Consequently, the voice operation command group 610 of Fig. 13 is extracted as the second command group M2. Alternatively, a remaining command group obtained by excluding the voice operation command group 650 from the voice operation command group 602 may be extracted as the second command group M2.
  • In step S24, the second search processing of searching the second search range for the search target character string is executed.
  • processing corresponding to the one voice operation command is executed (step S30).
  • the first priority order is given to the first command group M1 (650), and the second priority order is given to the second command group M2 (610), and subsequently, search processing in which a search range is each command group may be executed according to the priority order given to the corresponding command group.
  • Fig. 15 illustrates a detail setting screen 310 of a file format (PDF format) related to a scan job.
  • a plurality of software keys (buttons) 311 to 319 are displayed on the detail setting screen 310.
  • Fig. 16 illustrates a state in which a "stamp synthesis method" pull-down list (also referred to as “pull-down list screen”) 330 is displayed in response to pressing of a "stamp synthesis method” button 313 in the detail setting screen 310 ( Fig. 15 ).
  • the pull-down list 330 displays two options ("image” and "character”). Either of the two options can be set.
  • the MFP 10 obtains not only a command group 710 that has already been obtained until the change (the voice operation command group related to the detail setting screen 310), but also another command group 730 (the voice operation command group related to the pull-down list 330) (refer to Fig. 17 ) (step S12). Consequently, the command group 700 (701) that includes both of the voice operation command groups 710 and 730 is formed (refer to Fig. 17 ).
  • the MFP 10 obtains voice recognition data, which is a processing result of voice recognition processing related to user's voice input, from the voice recognition processing part 14, and determines a search target character string (for example, "MOJI (character)") on the basis of the voice recognition data (steps S21, S22).
  • the voice operation command group 730 related to the pull-down list 330 (also refer to Fig. 18 ) is determined as the first command group M1, and the voice operation command group 730 is set as a search range (the first search range) (step S23).
  • Specifically, from the voice operation command group 701 (refer to Fig. 17), data records (data records corresponding to the pull-down list screen 330), each of which prescribes "pull-down area (stamp synthesis)" as the field value of the field "group in screen", are extracted (narrowed down) from among the plurality of data records. Consequently, as shown in Fig. 18, the voice operation command group 730 is extracted as the first command group M1. In other words, the voice operation command group 730 related to the pull-down list screen 330 is set as the first search range.
  • In step S24, the first search processing of searching the first search range for the search target character string is executed.
  • processing corresponding to the one voice operation command is executed (steps S25, S30).
  • the voice operation command group 710 related to the detail setting screen 310 (refer to Fig. 17 ) is determined as the second command group M2, and the second command group M2 is set as a search range (the second search range) (step S23).
  • Specifically, from the voice operation command group 701 (refer to Fig. 17), data records (data records corresponding to the detail setting screen 310), each of which does not prescribe "pull-down area (stamp synthesis)" as the field value of the field "group in screen", are extracted (narrowed down) from among the plurality of data records (the data group of each row).
  • a remaining command group obtained by excluding the voice operation command group 730 from the voice operation command group 701 may be extracted as the second command group M2. Consequently, as shown in Fig. 17 , the voice operation command group 710 is extracted as the second command group M2.
  • In other words, the voice operation command group 710 related to the detail setting screen 310 is set as the second search range.
  • In step S24, the second search processing of searching the second search range for the search target character string is executed.
  • processing corresponding to the one voice operation command is executed.
  • the first priority order is given to the first command group M1 (730), and the second priority order is given to the second command group M2 (710), and subsequently, search processing in which a search range is each command group may be executed according to the priority order given to the corresponding command group.
  • Here, an example is presented in which the voice operation is performed only for the screens of two layers (two screens): the screen on the most frontward side (the pull-down list 330) and the screen 310 serving as a caller that has called that screen (the pull-down list 330).
  • the voice operation may be performed for screens of three or more layers.
  • priority orders are given, respectively, to voice operation command groups corresponding to the respective screens of three or more layers, and search processing in which a search range is each command group has only to be executed according to the priority order given to the corresponding command group.
  • the first priority order is given to the first command group M1 (730), and the second priority order is given to the second command group M2 (710), in a manner similar to the above.
  • the third priority order has only to be given to the voice operation command group (not illustrated) corresponding to a screen 305 (not illustrated) serving as a caller that has called the detail setting screen 310.
  • search processing in which a search range is each command group has only to be executed according to the priority order given to the corresponding command group. It should be noted that in Figs. 15 and 16 , illustration of the screen 305 serving as a caller that has called the detail setting screen 310 is omitted.
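The extension to screens of three or more layers can be sketched as a single loop over command groups ordered by priority. This is an illustrative sketch only; the function name and group structure are assumptions.

```python
# Generalized multi-stage search: command groups are listed in ascending
# priority order (first the most recently called screen's group, then each
# successive caller's group, e.g. [group_730, group_710, group_for_305]).
def multistage_search(target, command_groups_by_priority):
    """Return (stage, command) for the first stage whose range contains `target`."""
    for stage, group in enumerate(command_groups_by_priority, start=1):
        if target in group:
            return stage, target  # detected within this stage's search range
    return None  # not detected in any layer's command group
```

A hit in an earlier stage short-circuits the later stages, matching the priority-order behavior described above.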
  • In the above, the first search processing is performed by using, as a search target, the voice operation command group (M1) related to the screen (also referred to as a "screen area") displayed on the most frontward side (upper side), and the second search processing is performed by using, as a search target, the whole voice operation command group (M2) related to the caller's screen area, the caller having called the screen area displayed on the most frontward side.
  • the second search processing may be performed by using, as a search target, a voice operation command group obtained by partially excluding voice operation commands from the voice operation command group M2.
  • In Fig. 12, some software keys 216 and 217 among the plurality of software keys 211 to 217 in the basic menu screen 210 are covered by (hidden by) the numeric keypad screen 250 (in detail, by a part of the numeric keypad screen 250).
  • both of the two operation screens 210 and 250 are displayed on the touch panel 45, and at least a part of the basic menu screen 210 is in a state of being hidden by the numeric keypad screen 250.
  • In view of this, the voice operation commands corresponding to these software keys 216 and 217 may be excluded from the voice operation command group M2, and the second search processing may be performed by using, as a search target, the voice operation command group obtained by excluding those voice operation commands.
  • search processing related to the second command group M2 may be executed in a state in which commands corresponding to operation keys (display elements) hidden by at least a part of the numeric keypad screen 250 are excluded from the second command group M2.
  • search processing related to the second command group M2 has only to be executed on the basis of setting contents pertaining to the setting change. Specifically, in a case where "to exclude” is set, search processing related to the second command group M2 has only to be executed in a state in which the command corresponding to the display element hidden by the second screen is excluded. In contrast, in a case where "not to exclude” is set, search processing related to the second command group M2 has only to be executed in a state in which the command corresponding to the display element hidden by the second screen is included.
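Constructing the second search range according to the "to exclude"/"not to exclude" setting could be sketched as below. This is an illustrative sketch only; names and the flag are assumptions.

```python
# Build the second search range, optionally excluding commands whose display
# elements (operation keys) are hidden by the superimposed second screen.
def second_search_range(group_m2, hidden_commands, exclude_hidden=True):
    """Return M2 with hidden-element commands excluded when so configured."""
    if exclude_hidden:  # setting "to exclude"
        return [c for c in group_m2 if c not in hidden_commands]
    return list(group_m2)  # setting "not to exclude"
```

The second search processing is then executed over the returned list.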
  • the second search processing is immediately executed.
  • the present invention is not limited to this. Even when the first search processing ends, in a case where a predetermined condition (for example, "warning screen is being displayed", and the like) is fulfilled, the execution of the second search processing may be adapted to be exceptionally held (not executed).
  • For example, while a warning screen (a screen that notifies of a warning) is being displayed, the second search processing may be prevented from being executed.
  • the second search processing may be adapted to be held.
  • the voice operation command group 670 corresponding to the warning screen 270 (here, only "GAIDANSU (guidance)") ( Fig. 20 ) is determined as the first command group M1, and the first search processing is executed. Subsequently, the second search processing may be prevented from being executed until an abnormal state as the cause of the warning is eliminated.
  • the second search processing in which a search range is the second command group M2 may be prevented from being executed at least until the warning is canceled.
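Holding the second search processing under such a condition could be sketched as below. This is an illustrative sketch only; the condition flag and names are assumptions.

```python
# Search with a hold condition: while the warning screen is displayed, only
# the warning screen's own command group (M1, e.g. "GAIDANSU (guidance)")
# is searched, and the second search processing is held (skipped).
def search_with_hold(target, group_m1, group_m2, warning_displayed):
    if target in group_m1:
        return target
    if warning_displayed:
        return None  # second search processing is held until the warning is canceled
    return target if target in group_m2 else None
```

Once the warning is canceled (`warning_displayed` becomes false), the fallback to M2 is restored.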
  • the generation processing of generating the i-th text dictionary (step S23) is executed immediately before the i-th search processing (step S24).
  • the present invention is not limited to this.
  • the generation processing of generating the i-th text dictionary, or the like may be executed immediately after step S12 (steps S13, S14).
  • Fig. 46 shows that the generation processing of generating the i-th text dictionary is executed in step S13, and when it is not determined, in step S14, that the generation processing of generating the text dictionary should be ended, a value i is incremented, and the process then returns to step S13.
  • a plurality of text dictionaries may be generated before the search processing is started (step S24) (in more detail, immediately after the display change (immediately after step S11)).
  • a voice for operation is vocalized, and voice input is accepted.
  • a plurality of voice operation commands related to the plurality of operation screens are successively set as search targets, and search processing is executed in a plurality of stages.
  • a voice operation command group related to a screen that has been most recently called (also referred to as a "most recently called screen") is set as the first search target, and the first search processing is first executed.
  • a voice operation command group related to a screen serving as a caller that has called the most recently called screen is set as the second search target, and the second search processing is executed.
  • In a case where the operation screen displayed on the touch panel 45 is switched from one screen (the first screen) to the other screen (the second screen) (in a case where the other screen is displayed on the touch panel 45 "as an alternative to" the one screen), a voice for operation is vocalized and voice input is accepted in the state after display switching.
  • a voice for operation is vocalized in a state in which between the two operation screens successively displayed, the one screen (screen before switching) is not displayed, and the other screen (screen after switching) is displayed.
  • two voice operation command groups related to these two screens are successively set as search targets, and search processing is successively executed in two stages.
  • a voice operation command group related to the other screen (the screen that has been most recently called) is set as the first search target, and the first search processing is first executed.
  • a voice operation command group related to the screen (the one screen) serving as a caller that has called the other screen is set as the second search target, and the second search processing is executed.
  • In the second embodiment, such a mode will be described focusing on points of difference from the first embodiment.
  • a plurality of operation screens having respective display ranges that differ from one another are displayed.
  • Among the function buttons, the number of which is 24 in total, including seven function buttons 521 to 527 (not illustrated) related to copy basic setting and 17 function buttons 531 to 547 (refer to Figs. 22 and 23) related to copy practical setting, eight (or nine) function buttons are displayed on the touch panel 45 at each point in time.
  • 24 function buttons are classified into five function groups ("basic setting”, “original document”, “layout”, “tailoring”, “image quality/density”), and are arranged on a function group basis.
  • buttons 531 to 538 are displayed on the touch panel 45 as shown in Fig. 22 .
  • four function buttons 531 to 534 belonging to the "original document” group, and four function buttons 535 to 538 belonging to the "layout” group are displayed.
  • the operation screen 512 of Fig. 22 changes to an operation screen 513 of Fig. 23.
  • eight different function buttons 539 to 546 are displayed on the touch panel 45 as an alternative to the eight function buttons 531 to 538.
  • Two function buttons 539 and 540 belonging to the "layout” group, four function buttons 541 to 544 belonging to the "tailoring” group, and two function buttons 545 and 546 belonging to the "image quality/density” group are displayed.
  • 24 icons are arranged in a line in the horizontal direction in an icon display area 580 in the middle of the screen.
  • the 24 icons are icons corresponding to the 24 function buttons described above.
  • Function buttons that are currently displayed in the function button display area 570 in the upper part of the screen are indicated by a relative position of a frame 563 with respect to this icon column.
  • Specifically, icons corresponding to the function buttons that are currently displayed in the function button display area 570 in the upper part of the screen are indicated by being surrounded by the rectangular frame 563.
  • a voice operation command group 830 (refer to Fig. 25 ) corresponding to the function buttons 539 to 546 that are currently displayed is determined as the first command group M1 (the first search range).
  • a voice operation command group 820 (refer to Fig. 24) corresponding to the function buttons 531 to 538 that were displayed immediately before the change is determined as the second command group M2 (the second search range). Subsequently, two-stage search processing similar to that of the first embodiment is executed.
  • Fig. 21 is a flowchart illustrating operation according to the second embodiment. As understood from a comparison between Fig. 21 and Fig. 4 , the operation in step S12 mainly differs. The operation will be described below focusing on points of difference.
  • In a case where the operation screen 512 of Fig. 22 changes to the operation screen 513 of Fig. 23, in step S12 (S12b) according to the second embodiment, not only the command group 820 that has already been obtained until the change (the voice operation command group related to the operation screen 512), but also another command group 830 (the voice operation command group related to the operation screen 513) is obtained (refer to Figs. 24 and 25). Consequently, the command group 800 (801) that includes both of the voice operation command groups 820 and 830 is formed. Incidentally, as shown in Figs. 24 and 25, an existing position (the X-coordinate range in the virtual whole screen over the whole scroll range in the map-type display mode) of each operation key is prescribed as the field value of a field "X-coordinate range".
  • "copy map display” is given as a field value of the field "screen” of each data record (illustration is omitted in Figs. 24 and 25 ).
  • the voice operation command group 830 related to the operation screen 513 is determined as the first command group M1 (step S23), and search processing (the first search processing) in which a search range is the first command group M1 is executed (step S24).
  • Specifically, from the voice operation command group 801 (refer to Fig. 25), data records (data records corresponding to the operation screen 513), each of which prescribes an X-coordinate value within the current display range ("1545 to 2344") in the map-type display mode as the field value of the field "X-coordinate range", are extracted (narrowed down) from among the plurality of data records. Consequently, as shown in Fig. 25, the voice operation command group 830 is extracted as the first command group M1. In other words, the voice operation command group 830 related to the operation screen 513 is set as the first search range.
  • In step S26, it is determined that further search processing should be executed.
  • search processing is executed up to search processing related to the immediately preceding display screen (for example, 512).
  • the voice operation command group 820 related to the operation screen 512 is determined as the second command group M2 (step S23), and search processing (the second search processing) in which a search range is the second command group M2 is executed (step S24).
  • Specifically, from the voice operation command group 801 (refer to Fig. 24), data records (data records corresponding to the operation screen 512), each of which prescribes an X-coordinate value within the immediately preceding display range ("745 to 1544") in the map-type display mode as the field value of the field "X-coordinate range", are extracted (narrowed down) from among the plurality of data records. Consequently, as shown in Fig. 24, the voice operation command group 820 is extracted as the second command group M2. In other words, the voice operation command group 820 related to the operation screen 512 is set as the second search range. Subsequently, the second search processing is executed.
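The narrowing-down by the "X-coordinate range" field in the map-type display mode could be sketched as below. This is an illustrative sketch only; the record contents and command names are assumptions, not a reproduction of Figs. 24 and 25.

```python
# Each record stores the operation key's position within the virtual whole
# screen; extraction keeps records whose X-coordinate range lies within the
# display range in question (current range for M1, preceding range for M2).
def extract_by_x_range(records, range_start, range_end):
    """Extract data records whose X-coordinate range lies within [start, end]."""
    return [r for r in records
            if r["x_start"] >= range_start and r["x_end"] <= range_end]

records = [
    {"command": "WAKUSHOURI (frame erase)", "x_start": 800, "x_end": 899},
    {"command": "SHOSASSHI (booklet)", "x_start": 1600, "x_end": 1699},
]
group_m1 = extract_by_x_range(records, 1545, 2344)  # current display range
group_m2 = extract_by_x_range(records, 745, 1544)   # preceding display range
```

Each record thus falls into exactly one display range, yielding disjoint first and second search ranges.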
  • the second search processing in which a search range is the voice operation command group related to the operation screen 512 serving as a caller is executed.
  • This enables even a voice operation command that agrees with any of voice operation command groups related to the operation screen (the operation screen 512 serving as a caller) other than the operation screen 513 that has been most recently called to be searched for. Therefore, one voice operation command corresponding to a user's voice for operation can be properly detected from among the plurality of voice operation commands related to the plurality of operation screens.
  • Since the search processing related to the two operation screens 513 and 512 is performed in two stages, efficient search processing can be performed.
  • function buttons are scrolled in units of eight buttons in response to pressing of the scroll button 562 (561).
  • the first command group M1 and the second command group M2 have only to be set in a state in which operation commands are partially overlapped.
  • the second command group M2 after update may be set with the overlapped part between the first command group M1 and the second command group M2 excluded from the second command group M2.
  • the mode in which the present invention is applied in a case where one operation screen changes to another operation screen according to the scroll operation in the "map-type display mode" has been described.
  • the present invention is not limited to this.
  • the present invention may be applied in a case where one operation screen changes to another operation screen according to tab switching operation in the "tab-type display mode". Such a modified example will be described below.
  • a plurality of operation screens that differ from one another are selectively displayed according to the switching operation using a tab.
  • a plurality of function buttons here, 24 function buttons
  • the 24 function buttons include seven function buttons 421 to 427 related to copy basic setting, and 17 function buttons 431 to 447 related to copy practical setting (refer to Figs. 26 and 27 ).
  • a function button group belonging to the selected one group is displayed in a function button display area 470 (refer to Fig. 26 ) in the touch panel 45.
  • a plurality of tabs 451 to 455 that correspond to the plurality of groups respectively are provided in the tab specification area 460.
  • an original document tab 452 in the tab specification area 460 is selected, and four function buttons 431 to 434 corresponding to the original document tab 452 are displayed in the function button display area 470.
  • a voice operation command group 880 (refer to Fig. 29 ) corresponding to the function buttons (function buttons in the operation screen 413 that is currently displayed) 435 to 440 displayed as a current display target is determined as the first command group M1 (the first search range).
  • a voice operation command group 870 (refer to Fig. 28 ) corresponding to the function buttons (function buttons in the original operation screen 412) 431 to 434 displayed as a display target immediately before the change is determined as the second command group M2 (the second search range).
  • In a case where the operation screen 412 of Fig. 26 changes to the operation screen 413 of Fig. 27 , in step S12 (S12b) according to the second embodiment, not only the command group 870 that has already been obtained until the change (the voice operation command group related to the operation screen 412), but also another command group 880 (the voice operation command group related to the operation screen 413) is obtained (refer to Figs. 28 and 29 ). Consequently, the command group 800 (802) that includes both of the voice operation command groups 870 and 880 is formed.
  • the voice operation command group 880 is determined as the first command group M1 (step S23), and search processing (the first search processing) in which a search range is the first command group M1 is executed (step S24).
  • From the voice operation command group 802 (refer to Fig. 29 ), data records (data records corresponding to the layout group screen 413 ( Fig. 27 )), each of which prescribes "layout" as a field value of the field "group in screen", are extracted (by being narrowed down) from among the plurality of data records. Consequently, as shown in Fig. 29 , the voice operation command group 880 is extracted as the first command group M1. In other words, the voice operation command group 880 related to the layout group screen 413 is set as the first search range.
  • In step S26, it is determined that further search processing should be executed.
  • the process then returns to step S23, and this time, the voice operation command group 870 is determined as the second command group M2. Subsequently, search processing (the second search processing) in which a search range is the second command group M2 is executed (step S24).
  • From the voice operation command group 802 (refer to Fig. 28 ), data records (data records corresponding to the original document group screen 412), each of which prescribes "original document" as a field value of the field "group in screen", are extracted (by being narrowed down) from among the plurality of data records. Consequently, as shown in Fig. 28 , the voice operation command group 870 is extracted as the second command group M2. In other words, the voice operation command group 870 related to the original document group screen 412 is set as the second search range. Subsequently, the second search processing is executed.
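The tab-type variant above can be sketched in the same way as the map-type case, except that the command table is narrowed by the field "group in screen" instead of an X-coordinate range. The field and group names below are illustrative assumptions.

```python
# Hedged sketch of the tab-type two-stage search: records are narrowed by
# the field "group in screen" (e.g. "layout" vs. "original document").

def extract_by_group(records, group):
    """Narrow data records down to those belonging to the given tab group."""
    return [r for r in records if r["group"] == group]

def two_stage_search(records, current_group, caller_group, spoken):
    # M1: commands of the currently displayed tab (e.g. "layout").
    # M2: commands of the tab displayed immediately before the change.
    for group in (current_group, caller_group):
        for r in extract_by_group(records, group):
            if r["command"] == spoken:
                return r
    return None

records = [
    {"command": "book original", "group": "original document"},
    {"command": "page margin", "group": "layout"},
]
hit = two_stage_search(records, "layout", "original document", "book original")
# "book original" is found in the second stage (original document group).
```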
  • Although the generation processing of the i-th text dictionary is executed in step S23 (or step S24) here, the present invention is not limited to this.
  • the generation processing of the i-th text dictionary may be executed immediately after step S12 (steps S13, S14, and the like (refer to Fig. 46 )).
  • a voice for operation is vocalized in a state in which a first screen between two operation screens that are successively displayed on the touch panel 45 is displayed, and a second screen is not yet displayed.
  • a voice for operation is vocalized in a state in which one screen is currently displayed on the touch panel 45, and in a state in which there is a possibility that another screen will be displayed on the touch panel 45.
  • two-stage search processing is executed.
  • a voice operation command group related to the one screen (a screen that has been most recently called among screens that are being displayed (in detail, the first screen that is the screen serving as a caller of the second screen, and that is being displayed when the voice for operation is vocalized)) is set as the first search target, and the first search processing is first executed.
  • a voice operation command group related to the other screen that has a possibility of being called from the one screen (the second screen that is not yet displayed when the voice for operation is vocalized) is set as the second search target, and the second search processing is executed.
  • the voice operation command group related to the other screen is obtained beforehand before the other screen is displayed.
  • the first command group M1 is a command group related to one screen between the two operation screens that include the first screen related to the MFP 10, and the second screen displayed according to user's operation performed in the first screen (in detail, a screen that has been most recently called among screens that are being displayed).
  • the first to third embodiments share the same feature.
  • the voice operation command group related to the second screen is set as the first command group M1
  • the voice operation command group related to the first screen is set as the first command group M1.
  • such a mode will be described focusing on points of difference from the first and second embodiments.
  • the voice operation command group 820 related to the operation screen 512 (refer to Figs. 24 and 25 ) is set as the first command group M1, and the first search processing is executed.
  • the MFP 10 reads ahead (reads in advance) the voice operation command group (830) of a screen having a possibility of being changed from the operation screen 512 (an undisplayed screen having a possibility of becoming a screen called from the operation screen 512) from the storage part 5. Subsequently, the voice operation command group that has been read ahead is set as the second command group M2, and the second search processing is executed.
  • read processing of reading the voice operation command group related to the called screen is performed before the first search processing.
  • the read processing of reading the voice operation command group may be performed in parallel with the first search processing, or may be performed after the completion of the first search processing.
  • Fig. 30 is a flowchart illustrating operation according to the third embodiment. As understood from a comparison between Fig. 30 and Fig. 4 (and Fig. 21 ), the operation in step S12 mainly differs. The operation will be described below focusing on points of difference.
  • In step S12, the voice operation command group (text dictionary) 820 related to the operation screen 512 that is currently being displayed (that is being displayed when the voice for operation is vocalized) is obtained, and a voice operation command group related to a screen having a possibility of being displayed next to the operation screen 512 (a screen that is not being displayed when the voice for operation is vocalized) is obtained.
  • the operation screen 513 that is displayed in response to pressing of the scroll button 562 (refer to Fig. 23 ) is presented.
  • the voice operation command group 830 related to the operation screen 513 is also obtained.
  • the voice operation command group 820 related to the operation screen 512 that is currently being displayed is determined as the first command group M1 (step S23), and search processing (the first search processing) in which a search range is the first command group M1 is executed (step S24).
  • In step S26, it is determined that further search processing should be executed.
  • the process then returns to step S23.
  • search processing is executed up to search processing related to the immediately succeeding display screen (for example, 513).
  • In step S23, the voice operation command group 830 related to the operation screen 513, which has a possibility of being displayed next to the operation screen 512 but is not yet displayed, is determined as the second command group M2. Subsequently, search processing (the second search processing) in which a search range is the second command group M2 is executed (step S24).
  • the search processing related to the two operation screens 512 and 513 is performed in two stages, efficient search processing can be performed.
  • the first search processing in which a search range is the voice operation command group 820 related to the most recently called operation screen 512 that is currently being displayed is first performed, and in a case where the first search processing does not succeed, the second search processing in which a search range is the voice operation command group 830 related to the other screen 513 is performed.
  • search processing in which a search range is the voice operation command group 820 having a relatively high possibility of being vocalized as a voice for operation, between the two voice operation command groups 820 and 830, is performed earlier, and subsequently, search processing in which a search range is the other voice operation command group 830 is performed. Therefore, efficient search processing can be performed.
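The third embodiment's read-ahead can be sketched as follows: the command group of a screen that is not yet displayed is loaded from storage in advance and used as the second search range. The screen IDs, command names, and the dictionary standing in for the storage part 5 are illustrative assumptions.

```python
# Hedged sketch of the read-ahead search of the third embodiment.
# STORAGE stands in for the storage part 5; its contents are assumed.

STORAGE = {
    "512": ["density", "paper"],   # currently displayed screen
    "513": ["zoom", "finish"],     # reachable via scroll button 562
    "511": ["copy", "scan"],       # reachable via scroll button 561
}

def load_command_group(screen_id):
    """Read the voice operation command group of a screen from storage."""
    return STORAGE[screen_id]

def search(current_screen, possible_next_screens, spoken):
    # First search: the command group of the screen being displayed.
    if spoken in load_command_group(current_screen):
        return ("M1", spoken)
    # Second search: command groups read ahead for screens that have a
    # possibility of being displayed next, before they are displayed.
    for screen in possible_next_screens:
        if spoken in load_command_group(screen):
            return ("M2", spoken)
    return None

result = search("512", ["513", "511"], "zoom")  # found by read-ahead
```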
  • the operation screen 513 (refer to Fig. 23 ) is presented here as a screen (at least one screen) having a possibility of being displayed next to the operation screen 512
  • the screen having a possibility of being displayed next to the operation screen 512 may be, for example, the operation screen 511 (refer to Fig. 31 ) that is displayed in response to pressing of the scroll button 561 ( Fig. 22 ) in the operation screen 512.
  • screens each having a possibility of being displayed next to the operation screen 512 may be both of the operation screen 511 and the operation screen 513.
  • the concept according to the third embodiment is applied to the map-type display mode (refer to Figs. 22 , 23 , and the like).
  • the present invention is not limited to the above.
  • a voice for operation may be accepted in a state in which the operation screen 412 of Fig. 26 is displayed, and the voice operation command group 870 (refer to Fig. 28 ) related to the operation screen 412 may be set as the first command group M1 so as to execute the first search processing.
  • a voice operation command group related to at least one screen among a plurality of screens called from the operation screen 412 may be set as the second command group M2 so as to execute the second search processing.
  • the voice operation command group 880 (refer to Fig. 29 ) related to the operation screen 413 (refer to Fig. 27 ) having a possibility of being displayed next to the operation screen 412 may be set as the second command group M2 so as to execute the second search processing.
  • a voice for operation may be vocalized in a state in which only the main screen 210 is displayed ( Fig. 5 ), and the voice operation command group 610 (refer to Fig. 8 ) related to the main screen 210 may be set as the first command group M1 so as to execute the first search processing.
  • a voice operation command group related to at least one screen among a plurality of screens called from the operation screen 210 may be set as the second command group M2 so as to execute the second search processing.
  • the sub-screen 250 (refer to Fig. 12 ) having a numeric keypad may be identified as a screen having a possibility of being displayed next to the operation screen 210
  • the voice operation command group 650 (refer to Fig.
  • the sub-screen 230 (refer to Fig. 6 ) may be identified as a screen having a possibility of being displayed next to the operation screen 210, and the voice operation command group 630 (refer to Fig. 9 ) related to the operation screen 230 may be set as the second command group M2 so as to execute the second search processing.
  • the voice operation command group 710 (refer to Fig. 17 ) related to the main screen 310 may be set as the first command group M1 so as to execute the first search processing.
  • a voice operation command group related to at least one screen among a plurality of screens called from the operation screen 310 may be set as the second command group M2 so as to execute the second search processing.
  • the sub-screen 330 (refer to Fig. 16 ) having a pull-down menu may be identified as a screen having a possibility of being displayed next to the operation screen 310, and the voice operation command group 730 (refer to Fig.
  • the second command group M2 includes two voice operation commands ("GASHITSU (image quality)” and "MOJI (character)”) corresponding to two options (two display elements), 321 ("image"), 322 (“character”), in the pull-down list respectively.
  • setting of whether or not to execute the operation of the third embodiment can be changed.
  • setting of "whether or not to execute search processing (the second search processing) in which a search range is a command group (the second command group M2) related to a screen that is not yet displayed when the voice for operation is vocalized" can be changed (in particular, on a user basis).
  • setting has only to be changed according to user's setting operation using a predetermined setting screen (not illustrated).
  • whether or not to execute search processing related to the second command group M2 has only to be determined on the basis of setting contents pertaining to the setting change.
  • the second search processing in which a search range is the second command group has only to be executed. It should be noted that in a case where setting is not to execute the second search processing in which a search range is the second command group, the second search processing in which a search range is the second command group is not executed, and search processing has only to be executed up to the first search processing in which a search range is the first command group.
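The per-user setting change described above can be sketched as follows. The settings store, user names, and sample commands are illustrative assumptions.

```python
# Hedged sketch of the per-user setting that enables or disables the
# second search processing. USER_SETTINGS is an assumed settings store.

USER_SETTINGS = {
    "user_a": {"second_search_enabled": True},
    "user_b": {"second_search_enabled": False},
}

def search(user, m1, m2, spoken):
    if spoken in m1:
        return "M1"  # found in the first search range
    # The second search runs only if the user's setting allows it.
    enabled = USER_SETTINGS.get(user, {}).get("second_search_enabled", True)
    if enabled and spoken in m2:
        return "M2"  # found in the second search range
    return None  # search stops after the first stage when disabled

result_a = search("user_a", ["copy"], ["scan"], "scan")  # "M2"
result_b = search("user_b", ["copy"], ["scan"], "scan")  # None
```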
  • Although the generation processing of the i-th text dictionary is executed in step S23 or step S24 here, the present invention is not limited to this.
  • the generation processing of the i-th text dictionary may be executed immediately after step S12 (steps S13, S14, and the like (refer to Fig. 46 )).
  • the MFP 10 executes the first search processing (and/or the second search processing) together with exclusion processing of excluding, from the first command group M1 (and/or the second command group M2), an operation command that is determined to be non-executable on the basis of a job execution state of the MFP 10.
  • the exclusion processing has only to be executed in step S23, S25, or the like.
  • Fig. 32 is a diagram illustrating such exclusion processing.
  • the upper part of Fig. 32 shows a part of the first command group M1 before the execution of the exclusion processing, and the lower part of Fig. 32 shows a part of the first command group M1 after the execution of the exclusion processing.
  • a plurality of voice operation commands corresponding to a plurality of hardware keys including the start key (start button) 41, the stop key (stop button) 42, the reset key (reset button) 43, and the home key (home button) 44 are set as a part of the first command group M1.
  • a state in which each voice operation command corresponding to each operation key is executable is indicated in a field "job state”.
  • the voice operation commands "RISETTO (reset)” and “HOMU (home)” are executable in “all states" of the MFP 10
  • the voice operation command “SUTATO (start)” is executable in “job acceptable state”
  • the voice operation command “SUTOPPU (stop)” is executable in "job executing state”.
  • a "job executing" state is not a "job acceptable (state)".
  • the voice operation command ("SUTATO (start)") that prescribes a field value "job acceptable (state)" in the field "job state" is excluded from the first command group M1 on the basis of the data table of Fig. 32 .
  • the voice operation command "SUTATO (start)” is determined to be non-executable, and therefore the voice operation command "SUTATO (start)” is excluded from the first command group M1.
  • the first command group M1 after excluding the voice operation command "SUTATO (start)" is shown.
  • exclusion processing such as that shown in Fig. 33 may be performed.
  • the upper part of Fig. 33 shows a part of the first command group M1 before the execution of the exclusion processing, and the lower part of Fig. 33 shows a part of the first command group M1 after the execution of the exclusion processing.
  • the voice operation command "SUTOPPU (stop)” is determined to be non-executable, and therefore the voice operation command "SUTOPPU (stop)” is excluded from the first command group M1.
  • the first command group M1 after excluding the voice operation command "SUTOPPU (stop)" is shown.
  • Although the operation command that is determined to be non-executable on the basis of a job execution state in the MFP 10 is excluded from the first command group M1 here, the present invention is not limited to this.
  • the operation command that is determined to be non-executable on the basis of a job execution state in the MFP 10 may be excluded from the second command group M2.
  • the operation command that is determined to be non-executable on the basis of a job execution state in the MFP 10 may be excluded from both of the first command group M1 and the second command group M2.
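The exclusion processing based on the job execution state (cf. Figs. 32 and 33) can be sketched as follows. The table mirrors the states described above; the dictionary layout is an assumption.

```python
# Hedged sketch of the exclusion processing based on the MFP's job
# execution state. The command table follows the text of Figs. 32 and 33.

COMMANDS = [
    {"command": "RISETTO (reset)", "job_state": "all states"},
    {"command": "HOMU (home)", "job_state": "all states"},
    {"command": "SUTATO (start)", "job_state": "job acceptable"},
    {"command": "SUTOPPU (stop)", "job_state": "job executing"},
]

def exclude_non_executable(commands, current_state):
    """Drop commands whose required job state does not match the MFP's
    current state; commands executable in "all states" are always kept."""
    return [c for c in commands
            if c["job_state"] in ("all states", current_state)]

# While a job is executing, "SUTATO (start)" is excluded (Fig. 32);
# while the MFP is job-acceptable, "SUTOPPU (stop)" is excluded (Fig. 33).
executing = exclude_non_executable(COMMANDS, "job executing")
acceptable = exclude_non_executable(COMMANDS, "job acceptable")
```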
  • the first search processing (and/or the second search processing) may be executed together with exclusion processing of excluding, from the first command group M1 (and/or second command group M2), an operation command that is determined to be non-executable on the basis of a user authentication state in the MFP 10.
  • Fig. 34 is a diagram illustrating such exclusion processing.
  • the upper part of Fig. 34 shows a part of the first command group M1 before the execution of the exclusion processing, and the lower part of Fig. 34 shows a part of the first command group M1 after the execution of the exclusion processing.
  • a voice operation command that can be used only after user authentication cannot be accepted.
  • a voice operation command ("YUZA BOKKUSU (user box)") that prescribes a field value "usable only after user authentication" in a field "user authentication state" is excluded from the first command group M1 on the basis of the data table of Fig. 34 .
  • the voice operation command "YUZA BOKKUSU (user box)" is determined to be non-executable, and therefore the voice operation command "YUZA BOKKUSU (user box)" is excluded from the first command group M1.
  • the first command group M1 after excluding the voice operation command "YUZA BOKKUSU (user box)" is shown.
  • Although the operation command that is determined to be non-executable on the basis of a user authentication state in the MFP 10 is excluded from the first command group M1 here, the present invention is not limited to this.
  • the operation command that is determined to be non-executable on the basis of a user authentication state in the MFP 10 may be excluded from the second command group M2.
  • the operation command that is determined to be non-executable on the basis of a user authentication state in the MFP 10 may be excluded from both of the first command group M1 and the second command group M2.
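The exclusion processing based on the user authentication state (cf. Fig. 34) can be sketched analogously. The field values follow the text; the dictionary layout is an assumption.

```python
# Hedged sketch of the exclusion processing based on the user
# authentication state (cf. Fig. 34).

COMMANDS = [
    {"command": "YUZA BOKKUSU (user box)",
     "auth": "usable only after user authentication"},
    {"command": "PABURIKKU BOKKUSU (public box)",
     "auth": "usable even by public user"},
]

def exclude_by_auth(commands, authenticated):
    """Before user authentication, drop commands that are usable only
    after user authentication; after authentication, keep everything."""
    if authenticated:
        return list(commands)
    return [c for c in commands
            if c["auth"] == "usable even by public user"]

before_login = exclude_by_auth(COMMANDS, authenticated=False)
after_login = exclude_by_auth(COMMANDS, authenticated=True)
```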
  • This fifth embodiment presents a mode in which even the search order in a search range in search processing of each stage (each of the first search processing and the second search processing) is adjusted on the basis of a predetermined criterion.
  • the fifth embodiment will be described below focusing on points of difference from the first embodiment or the like.
  • identity determination processing of determining identity between each of the two or more operation commands and a search target character string is successively executed. Subsequently, processing corresponding to one voice operation command that first agrees with the search target character string among the two or more operation commands is executed.
  • searching within a search range in search processing of each stage can be more efficiently executed, and responsiveness from the time at which a voice for operation is vocalized until the time at which processing corresponding to the voice for operation is executed can be enhanced.
  • the search order in search processing of each stage is determined on the basis of, for example, a display position of a corresponding operation key in a search target screen in each stage.
  • the upper left side of a screen easily attracts a person's attention, and thus important operation keys (for example, frequently used operation keys) tend to be arranged on the upper left side; accordingly, a plurality of operation keys in a certain screen are often arranged relatively close to the upper left of the screen.
  • an evaluation value F1 (described next) related to a position of a corresponding operation key is calculated for each of a plurality of voice operation commands.
  • the evaluation value F1 is a distance (represented by a square root of the sum of the square of X and the square of Y) between coordinate values (X, Y) of a representative point (for example, an upper left point) of each operation key (refer to Fig. 35 ) and an upper left point (original point) in the screen.
  • a relatively high priority order is given to a voice operation command having a relatively low evaluation value F1 among the plurality of voice operation commands.
  • Fig. 35 shows each representative point (upper left point of each operation key (black small circle in the figure)) of each operation key in the detail setting screen 310 ( Fig. 16 ).
  • identity determination processing of determining identity with a search target character string is executed in succession from an operation key, the representative point of which exists on the relatively upper left side in the screen.
  • a relatively high priority order is given to an operation key, the representative point of which exists on the relatively upper left side.
  • identity determination processing of determining identity with a search target character string is executed in order of the operation keys 311, 312, 313, 314, 315, 316, 317, 318, and 319.
  • identity determination processing of determining identity between a search target character string and a voice operation command corresponding to the operation key 311 is executed.
  • identity determination processing of determining identity between the search target character string and a voice operation command corresponding to the operation key 312 is executed.
  • identity determination processing of determining identity between the search target character string and a voice operation command corresponding to each operation key 313, 314, ... is successively executed.
  • identity determination processing of determining identity with a search target character string is executed in order of the operation keys 321 and 322.
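The F1-based ordering above can be sketched as follows: keys whose representative (upper-left) point is closer to the screen origin are matched first. The coordinates and key names are illustrative assumptions.

```python
# Hedged sketch of the F1 evaluation value: the distance between each
# operation key's representative (upper-left) point and the screen's
# upper-left origin. A lower F1 means a higher search priority.

import math

def f1(key):
    x, y = key["top_left"]
    return math.hypot(x, y)  # sqrt(x**2 + y**2), distance from origin

def search_order(keys):
    # Identity determination proceeds from keys nearer the upper left.
    return sorted(keys, key=f1)

keys = [
    {"name": "319", "top_left": (600, 400)},
    {"name": "311", "top_left": (10, 10)},
    {"name": "314", "top_left": (10, 200)},
]
ordered = [k["name"] for k in search_order(keys)]  # key 311 is checked first
```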
  • first, search processing of the first stage (the first search processing) in which a search target is the voice operation command group 730 ( Fig. 17 ) in the pull-down list 330 is performed
  • subsequently, search processing of the second stage (the second search processing) in which a search target is the voice operation command group 710 of the detail setting screen 310 is performed.
  • identity determination processing of determining identity between the search target character string and each of two or more voice operation commands in the voice operation command group related to each stage is executed in the order described above.
  • the search order in search processing of each stage (the priority order in each search range) is determined on the basis of a display position of a corresponding operation key in a search target screen in each stage.
  • the present invention is not limited to this.
  • the search order in search processing of each stage may be determined on the basis of the priority order predetermined on the basis of contents of a specific field in the i-th text dictionary that prescribes the i-th voice operation command group.
  • an evaluation value F2 (F21) (also referred to as "priority coefficient") corresponding to each field value related to a field "job state" ("job executing", “job acceptable”, “all states”) is determined beforehand. For example, a relatively low evaluation value F2 (for example, “0.5”) is assigned to "all states", and a relatively high evaluation value F2 (for example, "1.0”, "0.9”) is assigned to other specific states ("job executing" and "job acceptable state").
  • a plurality of voice operation commands, which are search targets in a certain stage, are rearranged (sorted) on the basis of the evaluation value F2 (in decreasing order of the evaluation value F21) (refer to the lower part of Fig. 36 ).
  • Identity determination processing of determining identity between each of the plurality of voice operation commands and the search target character string is executed in the order after the sorting. Specifically, identity determination processing of determining identity between the voice operation command "SUTOPPU (stop)" having the highest priority coefficient "1.0" and the search target character string is executed with highest priority (first).
  • identity determination processing of determining identity between the voice operation command "SUTATO (start)” having the next highest priority coefficient "0.9” and the search target character string is executed next (second) by priority.
  • identity determination processing of determining identity between, for example, the voice operation command "RISETTO (reset)” having a priority coefficient "0.5" that is the highest next to the above and the search target character string is executed.
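The F21-based sorting above can be sketched as follows. The coefficient values follow the text; the table layout is an assumption.

```python
# Hedged sketch of sorting by the priority coefficient F21 assigned to
# each field value of the field "job state" (values follow the text).

F21 = {"job executing": 1.0, "job acceptable": 0.9, "all states": 0.5}

def sort_by_f21(commands):
    # Higher coefficient -> earlier identity determination.
    return sorted(commands, key=lambda c: F21[c["job_state"]], reverse=True)

commands = [
    {"command": "RISETTO (reset)", "job_state": "all states"},
    {"command": "SUTATO (start)", "job_state": "job acceptable"},
    {"command": "SUTOPPU (stop)", "job_state": "job executing"},
]
order = [c["command"] for c in sort_by_f21(commands)]
# "SUTOPPU (stop)" is checked first, "SUTATO (start)" second.
```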
  • the evaluation value F2 (F22) (priority coefficient) according to each field value related to the field "user authentication state" ("usable only after user authentication", “usable even by public user”) may be determined beforehand.
  • a relatively high evaluation value F2 (for example, "1.0") is assigned to "usable only after user authentication", and a relatively low evaluation value F2 (for example, "0.5") is assigned to "usable even by public user".
  • a plurality of voice operation commands, which are search targets in a certain stage, are rearranged (sorted) on the basis of the evaluation value F2 (in decreasing order of the evaluation value F22) (refer to the lower part of Fig. 37 ).
  • Identity determination processing of determining identity between each of the plurality of voice operation commands and the search target character string is executed in the order after the sorting. Specifically, identity determination processing of determining identity between the voice operation command "YUZA BOKKUSU (user box)" having the highest priority coefficient "1.0" and the search target character string is executed with highest priority (first). Subsequently, identity determination processing of determining identity between the voice operation command "PABURIKKU BOKKUSU (public box)" having the next highest priority coefficient "0.5” and the search target character string is executed next (second) by priority. Subsequently, identity determination processing of determining identity between, for example, each of other voice operation commands and the search target character string is executed in order of decreasing priority coefficient of the each other voice operation command.
  • an evaluation value corresponding to a field value related to a single field may be employed as the evaluation value F2 (F23 and the like).
  • the product of an evaluation value F21 corresponding to each field value related to the field "job state” and an evaluation value F22 corresponding to each field value related to the field "user authentication state” has only to be determined as the evaluation value F2 (F23).
  • a plurality of voice operation commands, which are search targets in a certain stage, may be rearranged (sorted) in decreasing order of the evaluation value F23, and identity determination processing of determining identity between each of the plurality of voice operation commands and the search target character string may be successively executed in the order after the sorting.
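The combined coefficient F23 (the product of F21 and F22 spanning the two fields "job state" and "user authentication state") can be sketched as follows. The coefficient values follow the text; the layout is an assumption.

```python
# Hedged sketch of F23 = F21 * F22, combining the "job state" and
# "user authentication state" coefficients (values follow the text).

F21 = {"job executing": 1.0, "job acceptable": 0.9, "all states": 0.5}
F22 = {"usable only after user authentication": 1.0,
       "usable even by public user": 0.5}

def f23(command):
    return F21[command["job_state"]] * F22[command["auth"]]

def sort_by_f23(commands):
    return sorted(commands, key=f23, reverse=True)

c1 = {"command": "A", "job_state": "job executing",
      "auth": "usable even by public user"}             # F23 = 1.0 * 0.5
c2 = {"command": "B", "job_state": "job acceptable",
      "auth": "usable only after user authentication"}  # F23 = 0.9 * 1.0
first = sort_by_f23([c1, c2])[0]["command"]
```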
  • search order in search processing of each stage may be determined on the basis of a state (execution state of a job) of the MFP 10.
  • the priority order (search order) of each voice operation command may be changed from the default order (the upper part of Fig. 38 ) to the order after sorting (the lower part of Fig. 38 ) on the basis of an execution state of a job (irrespective of the above-described evaluation value F2).
  • sorting is performed in such a manner that the highest priority order is given to the voice operation command "SUTOPPU (stop)" corresponding to the stop button (stop key).
  • identity determination processing of determining identity between each of the plurality of voice operation commands of each stage and a search target character string is successively executed, whereby search processing related to the voice operation command group of each stage is executed.
  • search order in search processing of each stage may be determined on the basis of user authentication state in the MFP 10.
  • the priority order (search order) of each voice operation command may be changed from the default order (the upper part of Fig. 39 ) to the order after sorting (the lower part of Fig. 39 ) on the basis of a user authentication completed state (irrespective of the above-described evaluation value F2).
  • sorting is performed in such a manner that the highest priority order is given to the voice operation command "YUZA BOKKUSU (user box)" corresponding to the "user box” button (not illustrated).
  • identity determination processing of determining identity between each of the plurality of voice operation commands of each stage and a search target character string is successively executed, whereby search processing related to the voice operation command group of each stage is executed.
  • identity determination processing of determining identity with a voice operation command that should be determined at an early stage as a result of reflecting the user authentication state in the MFP 10 is executed by priority. Therefore, the search time can be shortened to enhance the responsiveness.
  • search order in search processing of each stage may be determined on the basis of a past use count (use history) of each voice operation command.
  • Fig. 40 is a diagram illustrating a use history table that stores a past use count of each voice operation command.
  • the use count of each voice operation command in the use history table (the upper part of Fig. 40 ) is updated, according to the use of the voice operation command "KARA (color)", to the use count after updating the use history (the lower part of Fig. 40 ).
  • the use count of the voice operation command "KARA (color)" is updated from "10" times (the upper part) to "11" times (the lower part).
  • the use count of each voice operation command need only be used as an evaluation value F2 (F24).
  • a plurality of voice operation commands that are search targets in a certain stage are, after updating the use history, rearranged (sorted) in decreasing order of the evaluation value F24 (the use count of each voice operation command).
  • identity determination processing of determining identity between each of the plurality of voice operation commands and the search target character string then need only be executed successively.
  • identity determination processing for a voice operation command that should be determined at an early stage, as the result of reflecting the use history of each voice operation command, is executed preferentially. Therefore, the search time can be shortened to enhance the responsiveness.
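The use-history mechanism can be sketched in a few lines. The counts below are illustrative placeholders (only the "color" 10 → 11 update mirrors the Fig. 40 example), and the flat table stands in for the device's actual use history table.

```python
from collections import Counter

# Hypothetical use-history table in the spirit of Fig. 40.
use_history = Counter({"color": 10, "magnification ratio": 7, "stop": 3})

def record_use(command):
    """Update the use history when a voice operation command is used."""
    use_history[command] += 1

def search_order(commands):
    """Rearrange commands in decreasing order of the evaluation value
    F24 (the past use count); never-used commands sort last because
    Counter returns 0 for missing keys."""
    return sorted(commands, key=lambda c: use_history[c], reverse=True)

record_use("color")  # "color": 10 -> 11, as in the Fig. 40 example
```

Identity determination then proceeds over `search_order(...)`, so frequently used commands are compared first.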
  • although the evaluation value F2 (F24), in which only the use count of the voice operation command itself is reflected, is used here, the present invention is not limited to this.
  • the evaluation value F2 (F25), in which only the use count of the operation key (212 and the like (refer to, for example, Fig. 5 )) corresponding to each voice operation command ("KARA (color)" and the like) is reflected, may be used.
  • an evaluation value F2 (F26), in which both the use count of each voice operation command and the use count of the operation key corresponding to that voice operation command are reflected, may be used.
  • low priority commands: voice operation commands each having a priority lower than a predetermined level
  • search processing of each stage: the first search processing related to the first command group M1 and the like
  • a determination as to whether or not each voice operation command is a low priority command need only be made on the basis of, for example, a priority (evaluation value F2 (F27)) predetermined as shown in Fig. 41 .
  • priority: evaluation value F2 (F27)
  • priorities ("1.0", "0.6", "0.4", "0.3", ...) are predetermined for the voice operation commands ("SUTOPPU (stop)", "PUROGURAMU (program)", "KARA (color)", "BAIRITSU (magnification ratio)", ...) respectively.
  • a voice operation command whose priority (F27) is equal to or lower than a predetermined value TH1 (here, 0.3) is determined to be a low priority command.
  • a voice operation command whose priority (F27) is higher than the predetermined value is determined not to be a low priority command.
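A minimal sketch of the threshold test, assuming the Fig. 41 priority values; the table below is a hypothetical stand-in for the device's actual table, and the default for unknown commands is an assumption of this sketch.

```python
# Hypothetical priority table (evaluation value F2 (F27)) in the
# spirit of Fig. 41.
PRIORITY = {"stop": 1.0, "program": 0.6, "color": 0.4,
            "magnification ratio": 0.3}
TH1 = 0.3  # predetermined threshold

def is_low_priority(command):
    """A command whose priority (F27) is equal to or lower than TH1 is
    a low priority command; commands absent from the table are treated
    as low priority here (an assumption of this sketch)."""
    return PRIORITY.get(command, 0.0) <= TH1
```

Low priority commands found this way can then be deferred or excluded from the search processing of a stage when, for example, the MFP is in a high load state.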
  • a determination as to whether or not each voice operation command is a low priority command may be made on the basis of the above-described various evaluation values F2 or the like (refer to Fig. 35 to Fig. 41 ).
  • in a case where the print engine of the MFP 10 is being operated, it is determined that the MFP 10 is in a high load state.
  • the present invention is not limited to this.
  • a voice operation command group (i-th text dictionary) for each screen is obtained.
  • a voice operation command group (i-th text dictionary) for each screen may be obtained (generated) by executing, for example, character recognition processing (OCR processing) of recognizing characters included in an image of each operation screen.
  • OCR processing: character recognition processing
  • a plurality of button images are extracted by image processing (in detail, button image extraction processing) for each screen, and character strings in the plurality of button images are recognized by character recognition processing.
  • the recognized character strings are extracted as a voice operation command group for the button images, thereby generating a text dictionary.
  • coordinate values of a representative point (for example, the central point) of the button image corresponding to each voice operation command are assigned to that voice operation command.
  • the text dictionary (i-th text dictionary) related to each screen may be generated in this manner. It should be noted that the generation processing of generating the i-th text dictionary need only be executed in step S23 of Fig. 4 , step S13 of Fig. 46 , or the like.
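The dictionary generation step can be sketched as follows, with the OCR stage stubbed out: the input is assumed to be the (recognized text, bounding box) pairs that button image extraction plus character recognition would produce, and the central point serves as the representative point. The function name and data layout are illustrative, not the MFP's internals.

```python
def build_text_dictionary(button_regions):
    """button_regions: iterable of (recognized_text, (x0, y0, x1, y1)).
    Returns a text dictionary mapping each voice operation command to
    the coordinate values of the central point of its button image."""
    dictionary = {}
    for text, (x0, y0, x1, y1) in button_regions:
        dictionary[text] = ((x0 + x1) // 2, (y0 + y1) // 2)
    return dictionary

# Hypothetical OCR output for one button of the basic menu screen:
regions = [("original-document image quality", (10, 20, 110, 60))]
print(build_text_dictionary(regions))
```

Each entry thus pairs a command with the coordinates that a later (spurious) press event will use.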
  • the first screen is subjected to the button image extraction processing and the OCR processing.
  • the whole image 301 (images of all display areas) of the touch panel 45 is subjected to the button image extraction processing and the OCR processing. Consequently, a plurality of button images in the basic menu screen 210 are extracted, and character strings in the plurality of button images are recognized.
  • in the first display state of displaying the first image 301 ( Fig. 42 ), which includes the first screen (the basic menu screen 210) and does not include the second screen (refer to the numeric keypad screen 250 ( Figs. 6 and 43 )), the first image 301 is subjected to OCR processing or the like.
  • a character string in each button image is determined to be a voice operation command corresponding to that button image, and a text dictionary such as that shown in Fig. 44 is generated.
  • the text dictionary is provided with the voice operation command group 610 in the basic menu screen 210.
  • coordinate values of the button image corresponding to each voice operation command are assigned to that voice operation command.
  • coordinate values of the central position P61 of the "original-document image quality" button (button image) 211 are assigned to the voice operation command "GENKO GASHITSU (original-document image quality)". The same applies to the other voice operation commands.
  • the second screen (for example, the numeric keypad screen 250 (refer to Figs. 6 and 43 )) is called from the first screen (for example, the basic menu screen 210 (refer to Figs. 5 and 42 )) according to user's operation for the first screen, and the numeric keypad screen 250 is displayed so as to be superimposed on the basic menu screen 210.
  • the touch panel 45 changes to the second display state of displaying the second image 302 that includes the first screen, and that also includes the second screen.
  • the MFP 10 executes operation such as that described below.
  • the MFP 10 generates (obtains) a difference image 303 ( Fig. 43 ) between the whole image 302 ( Fig. 43 ) of the touch panel 45 after the change and the whole image 301 ( Fig. 42 ) of the touch panel 45 before the change.
  • the difference image 303 is obtained as an image having display contents of the second screen.
  • the difference image 303 (in other words, the second screen (for example, the numeric keypad screen 250)), which is a processing target, is subjected to the button image extraction processing and the OCR processing.
  • Fig. 43 shows a state in which the called numeric keypad screen 250 is extracted as a difference image (an area surrounded by a thick line), and only the difference image is subjected to the OCR processing.
  • a plurality of button images in the numeric keypad screen 250 are extracted, and character strings in the plurality of button images are recognized. Further, a character string in each button image is determined to be a voice operation command corresponding to that button image, and a text dictionary that is the part surrounded by a thick line frame in Fig. 45 is additionally generated.
  • the text dictionary is provided with the voice operation command group 650 (refer to Fig. 45 ) in the numeric keypad screen 250. In this manner, the command group 650 related to the second screen 250 is identified on the basis of a processing result of character recognition processing for the difference image 303.
  • coordinate values of the button image corresponding to each voice operation command are assigned to that voice operation command.
  • coordinate values of the central position of a "2" button (button image) in the numeric keypad screen 250 are assigned to a voice operation command "NI (2)". The same applies to the other voice operation commands.
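The difference-image step above can be sketched with a pixel-level bounding box. This is an assumption-laden toy (2-D lists stand in for the panel bitmaps, and real image processing would also segment individual buttons), but it shows why OCR only needs to run over the changed area.

```python
def changed_region(before, after):
    """before/after: equally sized 2-D lists of pixel values of the
    whole panel image. Returns the bounding box (row0, col0, row1, col1)
    of all changed pixels -- the area occupied by the newly called
    (second) screen -- or None if nothing changed. Only this region
    then needs to be passed to OCR."""
    rows = [r for r, (b, a) in enumerate(zip(before, after)) if b != a]
    if not rows:
        return None
    cols = [c for b, a in zip(before, after) if b != a
            for c, (pb, pa) in enumerate(zip(b, a)) if pb != pa]
    return (min(rows), min(cols), max(rows), max(cols))
```

A real implementation would diff bitmaps, but the idea is the same: the difference image 303 is a strict subset of the after-change image 302, so recognition work already done for the first screen is not repeated.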
  • search processing similar to that in each of the above-described embodiments is executed.
  • search processing in a plurality of stages is successively executed.
  • a search target character string: voice recognition data
  • in a case where the search target character string agrees with the voice operation command "BAIRITSU (magnification ratio)" assigned to the button image "magnification ratio", it is determined that a position corresponding to the coordinate values (450, 400) of a representative point of that button image has been pressed.
  • corresponding processing (call processing of the magnification ratio setting screen 230) is then executed.
  • the MFP 10 causes the numeric keypad screen 250 to be hidden, and displays the magnification ratio setting screen 230 so as to be superimposed on the basic menu screen 210 (refer to Fig. 6 ).
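The execution step, in which the searched command is turned into a (spurious) press at the stored representative point, can be sketched like this; `press` stands in for the internal event-generation routine, which this sketch does not specify.

```python
def execute_voice_command(recognized_text, text_dictionary, press):
    """Look the recognized character string up in the text dictionary;
    on a hit, spuriously press the representative point of the
    corresponding button image and report success."""
    coords = text_dictionary.get(recognized_text)
    if coords is None:
        return False
    press(*coords)
    return True

pressed = []
dictionary = {"magnification ratio": (450, 400)}
execute_voice_command("magnification ratio", dictionary,
                      lambda x, y: pressed.append((x, y)))
print(pressed)  # the representative point that was "pressed"
```

The screen change that follows (hiding the numeric keypad screen, displaying the magnification ratio setting screen) is then the ordinary reaction to that press event.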
  • the voice operation command group related to each screen is obtained by OCR processing
  • the first command group M1 and the second command group M2 are identified on the basis of, for example, a processing result of the OCR, and each search processing (the first search processing and the second search processing) is executed. Therefore, it is not necessary to register a voice operation command group related to each screen beforehand, which reduces the trouble of registration.
  • the voice operation command group 650 related to the numeric keypad screen 250 is obtained not by OCR processing or the like for all parts of the whole image 302 ( Fig. 43 ) of the touch panel 45 after the screen change, but by OCR processing or the like for the difference image 303, which is a part of the whole image 302. Therefore, duplicated recognition processing can be avoided, which improves processing efficiency.
  • search processing of each stage is based on the assumption that a voice operation command that first agrees with a search target character string is determined to be a voice operation command desired by a user.
  • the present invention is not limited to the above.
  • all voice operation commands included in a text dictionary (i-th text dictionary) of each stage may be subjected to identity determination processing in succession. Consequently, in a case where two or more voice operation commands, among all the voice operation commands, each agree with the search target character string, the two or more voice operation commands can be extracted without omission.
  • a text dictionary: the i-th text dictionary
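Extracting every agreeing command without omission, rather than stopping at the first hit, can be sketched as follows. The (command string, operation key id) layout per stage is an illustrative assumption, not the actual dictionary format.

```python
def find_all_matches(search_target, dictionary_stages):
    """Run identity determination against every entry of every stage's
    text dictionary and collect all operation keys whose voice
    operation command agrees with the search target character string."""
    return [key for stage in dictionary_stages
            for command, key in stage if command == search_target]

# Four "A4" keys across two stages, as in the paper detail-setting example:
stages = [[("A4", 281), ("A3", 285)],
          [("A4", 282), ("A4", 283), ("A4", 284)]]
print(find_all_matches("A4", stages))  # all four matching keys
```

When this returns more than one key, the ambiguity is resolved by the numbered-balloon selection described below.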
  • a detail setting screen 280 related to paper is displayed so as to be superimposed on the basic menu screen 210.
  • search operation: two-stage search operation
  • four options (voice operation commands "EYON (A4)") corresponding to the four respective operation keys 281 to 284 are detected.
  • balloon images are displayed by being associated with the four respective operation keys 281 to 284. Numbers (identifiers) used to identify one another are given to the plurality of balloon images respectively. Specifically, "1" is given to the operation key 281, "2" is given to the operation key 282, "3" is given to the operation key 283, and "4" is given to the operation key 284.
  • when the user further vocalizes the number (for example, "SAN (3)") corresponding to a desired option from among these options, the MFP 10 recognizes the vocalized contents (voice recognition). Consequently, a user's selection (specification) of a desired option (for example, the operation key 283) from among the plurality of options corresponding to the two or more voice operation commands is accepted. The MFP 10 determines the one voice operation command on the basis of the specified option (accepted desired option), and executes processing corresponding to the one voice operation command.
  • a desired option: for example, the operation key 283
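The numbered-balloon disambiguation can be sketched as follows; the key ids are those of the example above, and the 1-based numbering in display order mirrors the description (this is a sketch, not the MFP's actual selection logic).

```python
def assign_balloon_numbers(matching_keys):
    """Give each matching operation key a balloon number "1", "2", ...
    in display order."""
    return {str(i + 1): key for i, key in enumerate(matching_keys)}

def select_by_voice(matching_keys, spoken_number):
    """Resolve the user's vocalized number to the desired operation
    key, or None if the number matches no balloon."""
    return assign_balloon_numbers(matching_keys).get(spoken_number)

print(select_by_voice([281, 282, 283, 284], "3"))  # operation key 283
```

The selected key then determines the one voice operation command whose processing is executed.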
  • a voice recognition result, a search processing result, and the like, each relating to a voice vocalized by a user, are not displayed.
  • the present invention is not limited to this.
  • the voice recognition result and the like related to the vocalized voice may be displayed.
  • contents such as those shown in Fig. 49 may be displayed as a (part of) processing result of the search processing.
  • a character string that has been recognized as a voice operation command (in other words, as the result of search processing, a character string that has been searched for (detected) as a character string that agrees with one voice operation command, among character strings included in voice recognition data) is shown.
  • the wording "The following word has been recognized as a command. 'BAIRITSU (magnification ratio)'" indicates that voice input "BAIRITSU (magnification ratio)" has been recognized as a voice operation command.
  • a character string that has not been recognized as a voice operation command (in other words, as the result of search processing, a character string that has agreed with none of the plurality of voice operation commands (a character string that has not been searched for (detected) by the search processing)) is shown.
  • the wording "The following word has not been recognized as a command. 'ETO (well)'” indicates that voice input "ETO (well)" has not been recognized as a voice operation command.
  • a display element: an operation key and the like
  • the wording "A hit has been found in this area in the basic menu screen” is displayed in the central area of the touch panel 45, and a position of the operation key 215 corresponding to the one voice operation command is indicated with a void arrow.
  • a display mode such as that shown in Fig. 50
  • by voice operation of vocalizing "BAIRITSU (magnification ratio)", a user is able to check that an instruction equivalent to an instruction by pressing the "magnification ratio" key 215 has been accepted by the MFP 10.
  • the MFP 10 identifies the operation key 545 corresponding to the one voice operation command "SHITAJI CYOSEI (surface preparation)". In addition, by pressing the scroll key 562 in the current display screen 512 once to make a screen change (by causing the operation screen 513 to be displayed), the MFP 10 also identifies the operation key 545 as being displayable.
  • the MFP 10 obtains operating procedures including the operation of pressing the scroll key 562, and the operation of pressing the operation key 545 that is displayed after the screen change caused by the operation of pressing the scroll key 562.
  • the operation of pressing the scroll key 562 in the operation screen 512 is operation of causing the operation screen 513 to be displayed. Therefore, the scroll key 562 is also expressed as an operation key used to perform operation of causing the operation screen 513 to be displayed.
  • the MFP 10 causes such operating procedures to be displayed on the touch panel 45 (as an animation) in a moving image mode.
  • the MFP 10 highlights the scroll button 562 for a predetermined time period (for example, one second), thereby indicating that the same operation as that at the time of selecting the scroll button 562 in the operation screen 512 is being performed (refer to Fig. 51 ).
  • the MFP 10 (spuriously) causes the same event as that at the time of pressing the scroll button 562 in the operation screen 512, in other words, an operation event of the scroll button 562 (an internal event indicating that a representative position of the scroll button 562 has been pressed) to occur.
  • the MFP 10 executes a screen change from the operation screen 512 ( Fig. 51 ) to the operation screen 513 (refer to Figs. 23 and 52 ). It should be noted that this screen change is preferably performed in a mode in which display contents gradually scroll and change during the change.
  • the MFP 10 highlights the function button 545 (displaying in a specific color, and/or blinking, and the like) for a predetermined time period (for example, one second) this time, thereby indicating that the same operation as that at the time of selecting the function button (surface preparation button) 545 in the operation screen 513 is being performed (refer to Fig. 52 ).
  • a predetermined time period: for example, one second
  • the MFP 10 (spuriously) causes an operation event of the function button 545 (an internal event or the like indicating that a representative position of the function button 545 has been pressed) to occur.
  • the MFP 10 displays a detail display screen (not illustrated), which is displayed according to pressing of the function button 545, so as to be superimposed on the operation screen 513.
  • a result of search processing may be indicated by displaying a display image of Fig. 53 on the touch panel 45.
  • a large number of function buttons 539 to 547 including the function buttons 531 to 538 displayed in the operation screen 512 ( Fig. 22 ), and the function buttons 539 to 546 displayed in the operation screen 513 ( Fig. 23 ), are arranged in a line in the horizontal direction. It should be noted that in the display image of Fig. 53 , in order to display the large number of function buttons in a line, an area corresponding to the operation screen 512 is displayed by being scaled down in comparison with Fig. 22 .
  • in a function button line in which the large number of function buttons are arranged in a line in the horizontal direction, the operation key 545 corresponding to a character string that has been searched for by search processing is clearly expressed as an operation target key by voice operation. Specifically, a balloon image that includes the wording "desired setting ('surface preparation') has been found at this position" is displayed while a position of the operation key 545 in the function button line is indicated.
  • the MFP 10 executes voice recognition processing, and obtains a processing result of the voice recognition processing from the MFP 10 itself.
  • alternatively, the voice recognition processing may be executed by a portable information terminal (or an external server) that cooperates with the MFP 10, and the MFP 10 may obtain a processing result of the voice recognition processing from the portable information terminal or the like.

Claims (15)

  1. An image processing apparatus comprising:
    a display means;
    an obtaining means that obtains voice recognition data that is a voice recognition result relating to a voice vocalized in a state in which at least one operation screen (210, 230; 512, 513) is displayed on the display means;
    a determination means that determines a search target character string on the basis of the voice recognition data;
    a search means that executes search processing of searching, among a plurality of voice operation commands including a voice operation command group (630; 830) related to a first screen (210; 512) related to the image processing apparatus and a voice operation command group (610; 820) related to a second screen (230; 513) that is displayed according to user's operation for the first screen (210; 512), for a voice operation command that agrees with the search target character string; and
    a command execution means that executes processing corresponding to the one voice operation command searched for by the search means, wherein
    the search means
    executes, among the plurality of voice operation commands, first search processing in which a search range is a first command group (M1) that is given a first search priority order,
    in a case where the search target character string is not found by the first search processing in which the search range is the first command group (M1), executes, among the plurality of voice operation commands, second search processing in which a search range is a second command group (M2) that is given a second search priority order,
    the first command group (M1) is a voice operation command group (630; 830) related to a first priority screen among two operation screens (210, 230; 512, 513) that are the first screen (210; 512) and the second screen (230; 513), the first priority screen being a screen that is displayed when the voice was vocalized and that was called most recently before the voice was vocalized, and
    the second command group (M2) is a voice operation command group (610; 820) related to a second priority screen among the two operation screens (210, 230; 512, 513), the second priority screen differing from the first priority screen.
  2. The image processing apparatus according to claim 1, wherein
    the first priority screen is the second screen (230; 513) that is displayed due to the user's operation for the first screen (210; 512) and that is displayed when the voice was vocalized, and
    the second priority screen is the first screen (210; 512).
  3. The image processing apparatus according to claim 2, wherein
    the two operation screens (210, 230; 512, 513) differ from each other in level, and
    the obtaining means obtains the voice recognition result relating to the voice vocalized in a state in which both of the two operation screens (210, 230; 512, 513) are displayed on the display means.
  4. The image processing apparatus according to claim 3, wherein
    the obtaining means obtains the voice recognition result relating to the voice vocalized in a state in which the two operation screens (210, 230; 512, 513) are both displayed on the display means and in a state in which at least a part of the first screen (210; 512) is hidden by the second screen (230; 513), and
    the search means executes the second search processing related to the second command group (M2) in a state in which a command corresponding to a display element hidden by the second screen (230; 513) is excluded from the second command group (M2) corresponding to the first screen (210; 512).
  5. The image processing apparatus according to claim 3, wherein
    the obtaining means obtains the voice recognition result relating to the voice vocalized in a state in which the two operation screens (210, 230; 512, 513) are both displayed on the display means and in a state in which at least a part of the first screen (210; 512) is hidden by the second screen (230; 513),
    the image processing apparatus further comprises a setting means that, when the second search processing related to the second command group (M2) is executed, sets whether or not a command corresponding to a display element hidden by the second screen (230; 513) should be excluded from the second command group (M2) corresponding to the first screen (210; 512), and
    the search means executes the second search processing related to the second command group (M2) on the basis of setting contents set by the setting means.
  6. The image processing apparatus according to claim 2, wherein
    the display means successively displays the two operation screens (210, 230; 512, 513), and
    the obtaining means obtains the voice recognition result relating to the voice vocalized in a state in which the first screen (210; 512) among the two operation screens (210, 230; 512, 513) is not displayed on the display means and in a state in which the second screen (230; 513) is displayed on the display means.
  7. The image processing apparatus according to claim 1, wherein
    the display means successively displays the two operation screens (512, 513),
    the obtaining means obtains the voice recognition result relating to the voice vocalized in a state in which the first screen (512) among the two operation screens (512, 513) is displayed on the display means and in a state in which the second screen (513) is not displayed on the display means,
    the first priority screen is the first screen (512) that is a screen used to call the second screen (513) and that is displayed when the voice is vocalized, and
    the second priority screen is the second screen (513) that is not yet displayed when the voice is vocalized and that is displayed according to the user's operation for the first screen (512).
  8. The image processing apparatus according to any one of claims 1 to 7, wherein
    the search means excludes, from the first command group (M1) and/or the second command group (M2), an operation command that is determined to be unexecutable on the basis of a job execution state of the image processing apparatus or a user authentication state in the image processing apparatus, and subsequently executes the search processing.
  9. The image processing apparatus according to any one of claims 1 to 8, wherein
    the search means successively executes identity determination processing of determining identity between each of two or more operation commands and the search target character string according to the priority order assigned to each of the two or more operation commands included in the first command group (M1), and identifies, as the one voice operation command, an operation command that first agrees with the search target character string among the two or more operation commands, to execute the first search processing related to the first command group (M1), and
    the command execution means executes processing corresponding to the one voice operation command that first agrees with the search target character string among the two or more operation commands, or wherein
    the search means successively executes the identity determination processing of determining identity between each of two or more operation commands and the search target character string according to the priority order assigned to each of the two or more operation commands included in the second command group (M2), and identifies, as the one voice operation command, the operation command that first agrees with the search target character string among the two or more operation commands, to execute the second search processing related to the second command group (M2), and
    the command execution means executes the processing corresponding to the one voice operation command that first agrees with the search target character string among the two or more operation commands.
  10. The image processing apparatus according to any one of claims 1 to 9, further comprising
    a storage means that stores a command dictionary in which the voice operation command group (630; 830) related to the first screen (210; 512) and the voice operation command group (610; 820) related to the second screen (230; 513) are registered beforehand, wherein
    the search means obtains the first command group (M1) on the basis of the command dictionary to execute the first search processing, and obtains the second command group (M2) on the basis of the command dictionary to execute the second search processing.
  11. The image processing apparatus according to any one of claims 1 to 9, further comprising
    a character recognition means that executes character recognition processing of recognizing characters included in an image of the operation screen (210, 230; 512, 513), wherein
    the search means identifies the first command group (M1) and the second command group (M2) on the basis of a processing result of the character recognition processing for each of the first screen (210; 512) and the second screen (230; 513), and executes the search processing.
  12. The image processing apparatus according to any one of claims 1 to 11, wherein
    the display means displays, as a processing result of the search processing, a character string that, among the character strings included in the voice recognition data, has been searched for as a character string that agrees with the one voice operation command.
  13. The image processing apparatus according to any one of claims 1 to 12, wherein
    the display means displays, as a processing result of the search processing, a character string that, among the character strings included in the voice recognition data, agrees with none of the plurality of voice operation commands, and optionally, wherein
    the display means indicates an in-screen position of a display element corresponding to the one voice operation command found by the search processing.
  14. A method for controlling an image processing apparatus, the method comprising:
    a) obtaining voice recognition data that is a voice recognition result relating to a voice vocalized in a state in which at least one operation screen (210, 230; 512, 513) is displayed on a display means of the image processing apparatus, and determining a search target character string on the basis of the voice recognition data;
    b) executing search processing of searching, among a plurality of voice operation commands including a voice operation command group (630; 830) related to a first screen (210; 512) related to the image processing apparatus and a voice operation command group (610; 820) related to a second screen (230; 513) that is displayed according to user's operation for the first screen (210; 512), for a voice operation command that agrees with the search target character string; and
    c) executing processing corresponding to the one voice operation command searched for in b), wherein
    b) comprises:
    b-1) executing, among the plurality of voice operation commands, first search processing in which a search range is a first command group (M1) that is given a first search priority order; and
    b-2) in a case where the search target character string is not found by the first search processing in which the search range is the first command group (M1), executing, among the plurality of voice operation commands, second search processing in which a search range is a second command group (M2) that is given a second search priority order,
    the first command group (M1) being a voice operation command group (630; 830) related to a first priority screen among two operation screens (210, 230; 512, 513) that are the first screen (210; 512) and the second screen (230; 513), the first priority screen being a screen that is displayed when the voice was vocalized and that was called most recently before the voice was vocalized, and
    the second command group (M2) being a voice operation command group (610; 820) related to a second priority screen among the two operation screens (210, 230; 512, 513), the second priority screen differing from the first priority screen.
  15. Ein Computerprogramm, umfassend Anweisungen, die, wenn das Programm von einem in eine Bildverarbeitungsvorrichtung eingebauten Computer ausgeführt wird, den Computer veranlassen, das Verfahren gemäß Anspruch 14 auszuführen.
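
The two-stage search defined in steps b-1) and b-2) of claim 14 — search the command group of the first-priority screen (M1, the screen displayed and most recently invoked when the voice was uttered) first, and fall back to the second-priority group (M2) only on a miss — can be sketched as follows. This is a minimal, hypothetical illustration; the function name `find_command` and the example commands are not taken from the patent.

```python
from typing import Callable, Optional

# A voice operation command is modeled here as a no-argument callable.
Command = Callable[[], str]

def find_command(search_target: str,
                 m1: dict[str, Command],
                 m2: dict[str, Command]) -> Optional[Command]:
    # b-1) first search processing: the search range is the first command group M1
    if search_target in m1:
        return m1[search_target]
    # b-2) only if not found in M1: the search range is the second command group M2
    if search_target in m2:
        return m2[search_target]
    return None  # no voice operation command matches the search target string

# Hypothetical command groups for the two operation screens:
m1 = {"density": lambda: "open density setting"}   # first-priority screen
m2 = {"start": lambda: "start copy job"}           # second-priority screen

# "start" misses M1, so the second search processing finds it in M2.
print(find_command("start", m1, m2)())  # prints "start copy job"
```

Prioritizing the most recently invoked screen's commands resolves ambiguity when the same utterance could match commands on both screens: the screen the user is currently operating wins.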
EP19158045.5A 2018-02-19 2019-02-19 Image processing apparatus, method for controlling image processing apparatus and program Active EP3528244B1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2018027205A JP7003720B2 (ja) 2018-02-19 2018-02-19 Image processing apparatus, method for controlling image processing apparatus, and program

Publications (2)

Publication Number Publication Date
EP3528244A1 EP3528244A1 (de) 2019-08-21
EP3528244B1 true EP3528244B1 (de) 2021-09-29

Family

ID=65494048

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19158045.5A Active EP3528244B1 (de) Image processing apparatus, method for controlling image processing apparatus and program

Country Status (4)

Country Link
US (2) US10567600B2 (de)
EP (1) EP3528244B1 (de)
JP (2) JP7003720B2 (de)
CN (1) CN110177185A (de)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7003720B2 (ja) 2018-02-19 2022-01-21 Konica Minolta, Inc. Image processing apparatus, method for controlling image processing apparatus, and program
KR20190136832A (ko) * 2018-05-31 2019-12-10 Hewlett-Packard Development Company, L.P. Converting voice commands into text code blocks supporting a printing service
JP7182945B2 (ja) * 2018-08-09 2022-12-05 Canon Inc. Image forming system, image forming apparatus, and method for controlling image forming apparatus
JP2022036352A (ja) * 2018-12-27 2022-03-08 Sony Group Corporation Display control device and display control method
JP7275795B2 (ja) * 2019-04-15 2023-05-18 Konica Minolta, Inc. Operation accepting device, control method, image forming system, and program
JP7430034B2 (ja) * 2019-04-26 2024-02-09 Sharp Corporation Image forming apparatus, image forming method, and program
JP7418076B2 (ja) * 2019-07-16 2024-01-19 Canon Inc. Information processing system, information processing apparatus, and information processing method
JP2021081505A (ja) * 2019-11-15 2021-05-27 Konica Minolta, Inc. Image processing apparatus and control method
JP2021091182A (ja) * 2019-12-12 2021-06-17 Konica Minolta, Inc. Image processing apparatus and control method
CN111968640A (zh) 2020-08-17 2020-11-20 Beijing Xiaomi Pinecone Electronics Co., Ltd. Voice control method and apparatus, electronic device, and storage medium

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08220943A (ja) * 1995-02-14 1996-08-30 Ricoh Co Ltd Image forming apparatus
JPH0950291A (ja) * 1995-08-04 1997-02-18 Sony Corp Voice recognition device and navigation device
JPH11337364A (ja) * 1998-05-29 1999-12-10 Clarion Co Ltd Navigation system and method, and recording medium storing navigation software
JP5554804B2 (ja) * 1998-06-23 2014-07-23 Masanobu Kujirada System for supporting a user's discovery or recognition of an object or the like in a landscape
JP2001051694A (ja) * 1999-08-10 2001-02-23 Fujitsu Ten Ltd Voice recognition device
JP2003263307A (ja) 2001-11-29 2003-09-19 Nippon Telegr & Teleph Corp <Ntt> Hypertext voice control method, device and program therefor
JP3724461B2 (ja) * 2002-07-25 2005-12-07 Denso Corporation Voice control device
JP2005257903A (ja) * 2004-03-10 2005-09-22 Canon Inc Image forming apparatus, voice input processing method, storage medium storing a computer-readable program, and program
JP2006171305A (ja) * 2004-12-15 2006-06-29 Nissan Motor Co Ltd Navigation device and method for retrieving information by voice recognition in a navigation device
JP2006181874A (ja) * 2004-12-27 2006-07-13 Fuji Xerox Co Ltd Image forming apparatus and image processing method
JP4453596B2 (ja) * 2005-04-06 2010-04-21 Yaskawa Electric Corporation Robot control method and robot device
JP4781186B2 (ja) * 2006-07-18 2011-09-28 Canon Inc. User interface presentation device and method
JP2009109587A (ja) * 2007-10-26 2009-05-21 Panasonic Electric Works Co Ltd Voice recognition control device
JP2009230068A (ja) * 2008-03-25 2009-10-08 Denso Corp Voice recognition device and navigation system
KR101502003B1 (ko) 2008-07-08 2015-03-12 LG Electronics Inc. Mobile terminal and text input method thereof
JP2010049432A (ja) * 2008-08-20 2010-03-04 Konica Minolta Business Technologies Inc Display screen control device and method, and information processing apparatus
JP4811507B2 (ja) 2009-08-25 2011-11-09 Konica Minolta Business Technologies, Inc. Image processing system, image processing apparatus, and information processing apparatus
US8996386B2 (en) 2011-01-19 2015-03-31 Denso International America, Inc. Method and system for creating a voice recognition database for a mobile device using image processing and optical character recognition
JP5812758B2 (ja) * 2011-08-22 2015-11-17 Canon Inc. Information processing apparatus, control method therefor, and program
KR101330671B1 (ko) 2012-09-28 2013-11-15 Samsung Electronics Co., Ltd. Electronic apparatus, server, and control method thereof
KR102019719B1 (ko) * 2013-01-17 2019-09-09 Samsung Electronics Co., Ltd. Image processing apparatus, control method thereof, and image processing system
US10235130B2 (en) 2014-11-06 2019-03-19 Microsoft Technology Licensing, Llc Intent driven command processing
JP7003720B2 (ja) 2018-02-19 2022-01-21 Konica Minolta, Inc. Image processing apparatus, method for controlling image processing apparatus, and program

Also Published As

Publication number Publication date
US20200220987A1 (en) 2020-07-09
US10911618B2 (en) 2021-02-02
JP7003720B2 (ja) 2022-01-21
US20190260884A1 (en) 2019-08-22
US10567600B2 (en) 2020-02-18
JP2022048149A (ja) 2022-03-25
CN110177185A (zh) 2019-08-27
JP2019144759A (ja) 2019-08-29
JP7367750B2 (ja) 2023-10-24
EP3528244A1 (de) 2019-08-21

Similar Documents

Publication Publication Date Title
EP3528244B1 (de) Image processing apparatus, method for controlling image processing apparatus and program
US8531686B2 (en) Image processing apparatus displaying an overview screen of setting details of plural applications
JP5573765B2 (ja) Operation display device, scroll display control method, and scroll display control program
US20090046057A1 (en) Image forming apparatus, display processing apparatus, display processing method, and computer program product
JP5262321B2 (ja) Image forming apparatus, display processing device, display processing method, and display processing program
US7908563B2 (en) Display control system, image procesing apparatus, and display control method
US9088678B2 (en) Image processing device, non-transitory computer readable recording medium and operational event determining method
JP4894875B2 (ja) Information processing apparatus, method for controlling information processing apparatus, and control program for information processing apparatus
US9843691B2 (en) Image display device, image display system, image display method, and computer-readable storage medium for computer program
US20100017731A1 (en) Computer-readable recording medium having driver program stored
JP2008047106A (ja) System and method for customizing a user interface
US20110161867A1 (en) Image processing apparatus, display control method therefor, and recording medium
US20200267268A1 (en) Image forming apparatus, display control method, and recording medium
US11836442B2 (en) Information processing apparatus, method, and storage medium for associating metadata with image data
JP2006189924A (ja) Image display program and image display device
US11252289B2 (en) Image processing apparatus, information processing method, and storage medium
US11838462B2 (en) Information processing apparatus displays plurality of buttons on a screen, and enable or disable reorder function on a screen to automatically reorder the plurality of buttons, method, and non-transitory storage medium
US11372520B2 (en) Display input apparatus and image forming apparatus capable of moving plurality of icons from one page to another on display device and displaying moved icons thereon
US20210092245A1 (en) Information processing system, method for controlling the same, and storage medium
JP2012068817A (ja) Display processing device and computer program
JP5707794B2 (ja) Display processing device and computer program
JP7052842B2 (ja) Information processing apparatus and program
US9483163B2 (en) Information display apparatus, information display method, and computer readable medium
JP6701397B2 (ja) Input device, method for controlling input device, and program
JP4960401B2 (ja) Image display program and image display device

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20200217

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20210420

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602019007897

Country of ref document: DE

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

Ref country code: AT

Ref legal event code: REF

Ref document number: 1434900

Country of ref document: AT

Kind code of ref document: T

Effective date: 20211015

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211229

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211229

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20210929

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1434900

Country of ref document: AT

Kind code of ref document: T

Effective date: 20210929

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211230

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220129

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220131

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602019007897

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20220630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20220228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20220219

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20220228

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20220219

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20220228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20220228

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230510

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20231212

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210929

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20231228

Year of fee payment: 6

Ref country code: GB

Payment date: 20240108

Year of fee payment: 6