US11509781B2 - Information processing apparatus, and control method for information processing apparatus - Google Patents
Information processing apparatus, and control method for information processing apparatus
- Publication number
- US11509781B2 (Application No. US16/529,591)
- Authority
- US
- United States
- Prior art keywords
- user
- processing apparatus
- machine learning
- output
- sensor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00127—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
- H04N1/00323—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a measuring, monitoring or signaling apparatus, e.g. for transmitting measured information to a central location
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00885—Power supply means, e.g. arrangements for the control of power supply to the apparatus or components thereof
- H04N1/00888—Control thereof
- H04N1/00896—Control thereof using a low-power mode, e.g. standby
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00885—Power supply means, e.g. arrangements for the control of power supply to the apparatus or components thereof
- H04N1/00904—Arrangements for supplying power to different circuits or for supplying power at different levels
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2201/00—Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
- H04N2201/0077—Types of the still picture apparatus
- H04N2201/0094—Multifunctional device, i.e. a device capable of all of reading, reproducing, copying, facsimile transception, file transception
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- aspects of the disclosure generally relate to an information processing apparatus which estimates the presence of a user who uses the apparatus with use of, for example, a human presence sensor (a motion detector) to control the state of the apparatus, and a control method for the information processing apparatus.
- there are image processing apparatuses each of which is equipped with a human presence sensor, estimates the presence of a user who uses the apparatus with use of measured data obtained from the human presence sensor, and returns from power saving mode based on a result of the estimation.
- the apparatus discussed in Japanese Patent Application Laid-Open No. 2010-147725 is equipped with a human presence sensor of the electrostatic capacitance type and estimates the presence of a user based on an intensity detected by the sensor and a predetermined threshold value.
- the apparatus discussed in Japanese Patent Application Laid-Open No. 2017-135748 is equipped with an infrared array sensor serving as a human presence sensor and estimates the presence of a user based on a predetermined feature of a two-dimensional image representing a heat source present in a range of detection performed by the sensor.
- the apparatus discussed in Japanese Patent Application Laid-Open No. 2018-19361 is equipped with an ultrasonic sensor serving as a human presence sensor and estimates the presence of a user based on distance values of an object which reflects ultrasonic waves and a predetermined feature of a time series variation of the distance values.
- the result of estimation may become incorrect depending on the installation environment of each apparatus, and, for example, the presence or absence of a noise generation source and its location or a difference in walking route of a user who approaches the apparatus affects the result of estimation.
- the image processing apparatus may return from power saving mode although the image processing apparatus is not intended to be used, thus wastefully consuming electricity.
- the apparatus discussed in Japanese Patent Application Laid-Open No. 2010-147725 learns an actual apparatus operating environment with use of an intensity detected by the sensor and a history of the user operation and adjusts a threshold value which is to be used for estimation.
- the apparatus discussed in Japanese Patent Application Laid-Open No. 2017-135748 previously defines image patterns which can be measured with respect to respective changeable orientations of the infrared array sensor and adjusts an index of estimation for each image pattern.
- the apparatus discussed in Japanese Patent Application Laid-Open No. 2018-19361 previously determines rules of noise generation patterns of the ultrasonic sensor and adjusts an index of estimation based on a result of detection of each pattern.
- each of the above-mentioned estimation methods merely determines an estimation rule in advance and then only adjusts an index of estimation within that rule. Therefore, it is desirable to apply a machine learning model which is able to perform learning in such a way as to optimize the estimation rule itself in conformity with the actual apparatus operating environment.
- however, none of the above-mentioned estimation methods is able to acquire training data in the actual apparatus operating environment and perform learning in such a way as to optimize the estimation rule itself with use of the acquired training data.
- in particular, none of the above-mentioned estimation methods is able to generate, while the apparatus is operating, a training data set in which time-series measured data obtained from the human presence sensor is associated with a label obtained at that time (the presence or absence of a user who uses the apparatus in the actual apparatus operating environment).
- this issue is not confined to image processing apparatuses but is common to various types of information processing apparatuses.
- aspects of the present disclosure are generally directed to providing a mechanism that automatically generates a training data set while an apparatus is operating, thereby enabling estimation of the presence or absence of a user who uses the apparatus in a manner suited to each apparatus operating environment.
- a machine learning system includes a sensor configured to sense an object which is present in front of an information processing apparatus, a machine learning model configured to receive, as an input, time-series sensed values output from the sensor and to estimate whether a user who uses the information processing apparatus is present, a user interface configured to receive an operation performed by a user, and a learning unit configured to cause the machine learning model to learn with use of training data including the time-series sensed values output from the sensor and labels that are based on the presence and absence of an operation performed by a user and received by the user interface.
- FIGS. 1A and 1B are diagrams illustrating a configuration of an image processing system and an appearance of an image processing apparatus, respectively, according to an exemplary embodiment.
- FIGS. 2A and 2B are hardware configuration diagrams of apparatuses constituting the image processing system according to the present exemplary embodiment.
- FIGS. 3A, 3B, and 3C are software configuration diagrams of the image processing system according to the present exemplary embodiment.
- FIGS. 4A, 4B, and 4C are diagrams illustrating an example of a measurement area for a human presence sensor and an example of measured data obtained by the human presence sensor.
- FIGS. 5A and 5B are diagrams illustrating an example of a behavior of movement of the user and an example of measured data obtained by the human presence sensor at that time.
- FIGS. 6A and 6B are diagrams illustrating an example of a behavior of movement of the user and an example of measured data obtained by the human presence sensor at that time.
- FIGS. 7A, 7B, and 7C are diagrams illustrating a machine learning model and a data structure of a training data set which is to be input to the machine learning model.
- FIG. 8 is a flowchart illustrating an example of learned model updating processing.
- FIG. 9 is a flowchart illustrating an example of user estimation processing.
- FIG. 10 is a flowchart illustrating an example of training data set generation processing.
- FIGS. 11A, 11B, and 11C are diagrams illustrating an example of a behavior of movement of the user and an example of a training data set generated at that time.
- FIGS. 12A, 12B, and 12C are diagrams illustrating an example of a behavior of movement of the user and an example of a training data set generated at that time.
- FIGS. 13A, 13B, and 13C are diagrams illustrating an example of a behavior of movement of the user and an example of a training data set generated at that time.
- FIG. 1A is an outline diagram illustrating an example of a configuration of an image processing system according to an exemplary embodiment.
- the image processing system 1 includes a learned model generation apparatus 10 and an image processing apparatus 100 , which are interconnected via a network 20 in such a way as to be able to communicate with each other.
- the learned model generation apparatus 10 is configured with a personal computer (PC) or a server which generates a learned model described below. Furthermore, the learned model generation apparatus 10 can be configured with a plurality of apparatuses or can be, for example, a cloud server, and can be implemented with use of, for example, a cloud service.
- the image processing apparatus 100 is a multifunction peripheral equipped with the functions of, for example, scanning and printing.
- the network 20 is configured with a local area network (LAN) or a wide area network (WAN), such as the Internet or an intranet.
- the learned model generation apparatus 10 and the image processing apparatus 100 are connected directly to the network 20 or are connected to the network 20 via a connection device, such as a router, a gateway, or a proxy server (each not illustrated).
- the configuration of the network 20 , the connection device for each element, and the number of connection devices therefor are not limited to the above-mentioned ones, but only need to be configured to enable transmission and reception of data between the learned model generation apparatus 10 and the image processing apparatus 100 .
- the function of the learned model generation apparatus 10 can be configured to be provided in the image processing apparatus 100 .
- FIG. 1B is an outline diagram illustrating an example of an appearance of the image processing apparatus 100 .
- the image processing apparatus 100 includes a scanner 110 , a human presence sensor 120 , an operation display panel 130 , and a printer 140 .
- the scanner 110 is a general scanner, which can be included in a multifunction peripheral.
- the scanner 110 is capable of performing a first reading method, which, while fixing an image reading sensor at a predetermined position, sequentially conveys sheets of a document one by one and causes the image reading sensor to read an image of each sheet of the document, and a second reading method, which causes an image reading sensor to scan a document fixedly placed on a platen glass and read an image of the document.
- a document stacking tray 111 is a tray on which to stack sheets of a document, which are sequentially conveyed in the case of the first reading method. Furthermore, the document stacking tray 111 is equipped with a sensor which detects sheets of a document being stacked on the document stacking tray 111 .
- a document conveyance unit 112 conveys sheets of a document stacked on the document stacking tray 111 one by one in the case of the first reading method. Moreover, the document conveyance unit 112 is able to be pivotally moved up and down for opening and closing, and, in the case of the second reading method, an image of a document placed on a platen glass, which appears when the document conveyance unit 112 is pivotally moved up, is read. Furthermore, the document conveyance unit 112 is equipped with a sensor which detects the opening and closing states of the document conveyance unit 112 in upward and downward motions.
- the human presence sensor 120 is a sensor for detecting a user who uses the image processing apparatus 100 .
- the human presence sensor 120 is, for example, an infrared array sensor in which a plurality of infrared receiving elements, which receives infrared rays, is arranged in a matrix shape. Furthermore, the human presence sensor 120 only needs to be able to detect a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 , and can be of any type of detection method, such as an ultrasonic sensor, an electrostatic capacitance sensor, a pyroelectric sensor, or a red-green-blue (RGB) area sensor (camera).
- the operation display panel 130 includes, for example, light-emitting diodes (LEDs), an operation button for switching power modes, and a liquid crystal touch display.
- the operation display panel 130 not only displays contents of an operation performed by the user and an internal state of the image processing apparatus 100 but also receives an operation performed by the user.
- the printer 140 is a general printer, which can be included in a multifunction peripheral.
- the printer 140 includes a cassette 143 , a cassette 144 , and a cassette 145 , each of which is formed in a drawer shape as a printing paper storage unit, and further includes a manual feed tray 142 , which is exposed to outside the image processing apparatus 100 .
- each of the cassettes 143 , 144 , and 145 is drawn out forward, sheets of printing paper are supplied to each cassette, and, then, each cassette is closed. Furthermore, each of the cassettes 143 , 144 , and 145 is equipped with a sensor which detects opening and closing of the cassette.
- the manual feed tray 142 is used to supply sheets of printing paper stacked thereon to the printer 140 , and is equipped with a sensor which detects sheets of printing paper being stacked on the manual feed tray 142 .
- the printer 140 further includes an image forming unit 141 and a sheet discharge unit 146 .
- the image forming unit 141 conveys the supplied sheet of printing paper and then forms an image on the sheet of printing paper.
- a cover which covers the front surface of the image forming unit 141 is able to be opened and closed in an anterior direction, so that the user is enabled to replace consumable parts needed for image formation or remove a jammed sheet.
- the image forming unit 141 is equipped with a sensor which detects the opening and closing states of the above-mentioned cover thereof.
- a sheet of printing paper with an image formed thereon by the image forming unit 141 is discharged onto the sheet discharge unit 146 .
- FIG. 2A is a diagram illustrating an example of a hardware configuration of the learned model generation apparatus 10 .
- the learned model generation apparatus 10 is configured with a general computer and includes, for example, a central processing unit (CPU) 11 , a read-only memory (ROM) 12 , a random access memory (RAM) 13 , a hard disk drive (HDD) 14 , and a network interface (I/F) 15 .
- the CPU 11 is an execution medium which executes programs incorporated in the learned model generation apparatus 10 .
- the ROM 12 is a non-volatile memory.
- the RAM 13 is a volatile memory.
- the HDD 14 is a storage medium such as a magnetic disk. For example, programs for performing flowcharts described below are stored in the ROM 12 or the HDD 14 , and such programs are loaded onto the RAM 13 when being executed.
- the RAM 13 operates as a work memory used for the programs to be executed by the CPU 11 .
- a learned model which is generated by the programs being executed is stored in the HDD 14 .
- the network I/F 15 takes charge of transmission and reception of data which are performed via the network 20 .
- FIG. 2B is a diagram illustrating an example of a hardware configuration of the image processing apparatus 100 .
- each solid line arrow denotes a signal line used to transmit or receive a control command or data.
- each dotted line arrow denotes a power line.
- the image processing apparatus 100 includes a main controller 150 , a user estimation unit 160 , a network I/F 170 , a scanner 110 , a printer 140 , an operation display panel 130 , and a power source management unit 180 .
- the main controller 150 includes a main CPU 151 , a main ROM 152 , a main RAM 153 , and an HDD 154 .
- the main CPU 151 is an execution medium which executes programs incorporated in the main controller 150 .
- the main ROM 152 is a non-volatile memory.
- the main RAM 153 is a volatile memory.
- the HDD 154 is a storage medium such as a magnetic disk. For example, programs for performing flowcharts described below are stored in the main ROM 152 or the HDD 154 , and such programs are loaded onto the main RAM 153 when being executed.
- the main RAM 153 operates as a work memory used for the programs to be executed by the main CPU 151 .
- the user estimation unit 160 includes a human presence sensor 120 , a sub-CPU 161 , a sub-ROM 162 , and a sub-RAM 163 .
- the sub-CPU 161 is an execution medium which executes programs incorporated in the user estimation unit 160 .
- the sub-ROM 162 is a non-volatile memory.
- the sub-RAM 163 is a volatile memory. For example, programs for performing flowcharts described below are stored in the sub-ROM 162 , and such programs are loaded onto the sub-RAM 163 when being executed.
- the sub-RAM 163 operates as a work memory used for the programs to be executed by the sub-CPU 161 .
- the network I/F 170 takes charge of transmission and reception of data which are performed via the network 20 .
- the power source management unit 180 controls supplying of power to each unit of the image processing apparatus 100 .
- the image processing apparatus 100 has at least two modes as power modes.
- the power modes include a standby mode, which is the state of being ready to perform ordinary operations of the image processing apparatus 100 , such as scanning and printing, and a sleep mode, which consumes lower amounts of power than the standby mode.
- the main controller 150 controls the power source management unit 180 to transition the power mode from the standby mode to the sleep mode.
- in the sleep mode, supplying of power to, for example, the scanner 110 and the printer 140 is stopped, and supplying of power to the main controller 150, the operation display panel 130, and the network I/F 170 is also stopped except for a part.
- even in the sleep mode, the user estimation unit 160 remains in a state of being able to operate.
- in the sleep mode, in a case where it is estimated by the user estimation unit 160 that a user who approaches the image processing apparatus 100 with the intention to use it is present, electric power is supplied from the power source management unit 180 to each unit. With this control performed, the image processing apparatus 100 returns from the sleep mode to the standby mode. Moreover, the main controller 150 also performs control in such a way as to switch the power modes based on pressing of the button provided on the operation display panel 130 for switching the power modes.
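- As an illustrative aid only (not part of the patent disclosure), the following minimal Python sketch models the power-mode transition just described; the names PowerMode and next_mode are hypothetical.

```python
from enum import Enum, auto

class PowerMode(Enum):
    STANDBY = auto()  # ready for ordinary operations such as scanning and printing
    SLEEP = auto()    # consumes less power than the standby mode

def next_mode(mode, presence_estimated, button_pressed):
    """Return from sleep to standby when the user estimation unit reports
    "presence of user" or when the power-mode button is pressed."""
    if mode is PowerMode.SLEEP and (presence_estimated or button_pressed):
        return PowerMode.STANDBY
    return mode
```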
- for each of the above-mentioned hardware constituent elements (for example, a CPU, a ROM, a RAM, and an HDD), only one element is illustrated in the learned model generation apparatus 10 of FIG. 2A and in the main controller 150 and the user estimation unit 160 of FIG. 2B.
- however, a configuration in which each of the above-mentioned hardware constituent elements includes a plurality of elements can be employed, and the respective constituent elements and their connection configurations are not limited to the above-mentioned ones.
- another type of storage device such as a solid state drive (SSD) can be included.
- FIG. 3A is a block diagram illustrating a software configuration of a program 30 which is installed on the learned model generation apparatus 10 .
- the function of the program 30 is implemented by the CPU 11 of the learned model generation apparatus 10 loading programs stored in, for example, the ROM 12 or the HDD 14 onto the RAM 13 and executing the programs as needed.
- the program 30 includes a training data set receiving unit 31 , a learned model updating unit 32 , and a learned model transmitting unit 33 . Details thereof are described as follows.
- the training data set receiving unit 31 receives a training data set which is generated by a training data set generation unit 353 described below.
- the learned model updating unit 32 updates a learned model with use of the training data set received by the training data set receiving unit 31 . Details of the learned model based on a machine learning model are described below with reference to FIGS. 7A, 7B, and 7C .
- the learned model transmitting unit 33 transmits the learned model updated by the learned model updating unit 32 to the user estimation unit 160 .
- FIG. 3B is a block diagram illustrating a software configuration of a program 350 which is installed on the main controller 150 .
- the function of the program 350 is implemented by the main CPU 151 of the main controller 150 loading programs stored in, for example, the main ROM 152 or the HDD 154 onto the main RAM 153 and executing the programs as needed.
- the program 350 includes a device control unit 351 , a user operation detection unit 352 , a training data set generation unit 353 , a human presence sensor data receiving unit 354 , and a training data set transmitting unit 355 .
- the device control unit 351 issues control instructions to the scanner 110 and the printer 140 , and acquires pieces of status information which are obtained from the respective sensors included in the scanner 110 and the printer 140 .
- the user operation detection unit 352 detects a user operation received by the operation display panel 130 . For example, the user operation detection unit 352 detects pressing of the above-mentioned button for switching the power modes and a touch operation performed on the liquid crystal touch display. Moreover, the user operation detection unit 352 detects a user operation from the status information obtained via the device control unit 351 . For example, the user operation detection unit 352 detects sheets of a document having been stacked on the document stacking tray 111 of the scanner 110 or the document conveyance unit 112 of the scanner 110 having been opened and closed.
- the user operation detection unit 352 detects the cassette 143 , 144 , or 145 of the printer 140 having been opened and closed, sheets of printing paper having been stacked on the manual feed tray 142 of the printer 140 or the cover of the image forming unit 141 of the printer 140 having been opened and closed.
- the human presence sensor data receiving unit 354 receives human presence sensor data (time-series data sequentially acquired within a fixed time) which is acquired by a human presence sensor data acquisition unit 363 described below.
- the training data set generation unit 353 associates time at which the user operation detection unit 352 detected a user operation and time at which the human presence sensor data was acquired with each other, and generates a training data set by combining the associated user operation and human presence sensor data.
- the training data set which is generated at this time is described below with reference to FIGS. 11A, 11B, and 11C to FIGS. 13A, 13B, and 13C .
- the training data set transmitting unit 355 transmits the generated training data set to the learned model generation apparatus 10 .
- FIG. 3C is a block diagram illustrating a software configuration of a program 360 which is installed on the user estimation unit 160 .
- the function of the program 360 is implemented by the sub-CPU 161 of the user estimation unit 160 loading programs stored in, for example, the sub-ROM 162 onto the sub-RAM 163 and executing the programs as needed.
- the program 360 includes a learned model receiving unit 361 , a user presence or absence estimation unit 362 , a human presence sensor data acquisition unit 363 , an estimation result transmitting unit 364 , and a human presence sensor data transmitting unit 365 .
- the human presence sensor data acquisition unit 363 acquires measured data from the human presence sensor 120 at intervals of a predetermined time, and buffers a predetermined number of pieces of measured data in the sub-RAM 163 . Moreover, in response to a transmission request from the main controller 150 , the human presence sensor data transmitting unit 365 transmits the human presence sensor data buffered in the sub-RAM 163 to the main controller 150 . Furthermore, the human presence sensor data transmitting unit 365 can transmit the human presence sensor data buffered in the sub-RAM 163 to the main controller 150 as needed when the main controller 150 is in the standby mode.
- the user presence or absence estimation unit 362 estimates the presence of a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 , by inputting the measured data acquired by the human presence sensor data acquisition unit 363 to a learned model based on a machine learning model described below.
- the estimation result transmitting unit 364 communicates such an estimation result to the power source management unit 180 and the main controller 150 .
- the learned model receiving unit 361 receives the learned model updated by the learned model updating unit 32 of the learned model generation apparatus 10 and then transmitted by the learned model transmitting unit 33 . After that, the user presence or absence estimation unit 362 updates the learned model which is used by the user presence or absence estimation unit 362 with the learned model received by the learned model receiving unit 361 .
- the present exemplary embodiment is not limited to a configuration in which the function of the program 30 is provided on the outside of the image processing apparatus 100 , but a configuration in which the program 30 is installed on the image processing apparatus 100 can also be employed.
- a configuration in which the functions of the training data set receiving unit 31 , the learned model updating unit 32 , and the learned model transmitting unit 33 are implemented in the image processing apparatus 100 can also be employed.
- FIGS. 4A and 4B are schematic diagrams illustrating an example of a measurement area in which the human presence sensor 120 is able to perform measurement.
- FIG. 4A represents a positional relationship between the human presence sensor 120 , the user, and the measurement area when the image processing apparatus 100 is viewed from the lateral side thereof.
- FIG. 4B represents a positional relationship between the human presence sensor 120 , the user, and the measurement area when the image processing apparatus 100 is viewed from above.
- the human presence sensor 120 receives infrared rays radiated from the heat source of an object (for example, a human body) with each of its infrared receiving elements (infrared sensors), which are arranged in a lattice shape. The human presence sensor 120 then identifies the shape of the heat source (the detection region) as a temperature distribution by using temperature values that are based on the quantities of infrared rays (light receiving results) received by the respective infrared receiving elements.
- FIG. 4A illustrates a measurement area in which the detection surface with the infrared receiving elements of the human presence sensor 120 arranged thereon is used to perform detection.
- FIG. 4B indicates that a space which radially extends from the detection surface with the infrared receiving elements of the human presence sensor 120 arranged thereon is the measurement area.
- FIG. 4C illustrates an example of measured data which can be acquired from the human presence sensor 120 in a case where the relationship illustrated in FIGS. 4A and 4B exists.
- the measured data output from the human presence sensor 120 is acquired as a two-dimensional image on the detection surface with infrared receiving elements arranged thereon.
- the value of each pixel of the two-dimensional image indicates the temperature which is measured from each pixel.
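- To make the data format concrete, here is a minimal Python sketch (an illustration, not the patent's implementation) that arranges one measurement into such a two-dimensional temperature image; the 8×8 grid size is a hypothetical assumption, as the patent does not specify the number of infrared receiving elements.

```python
import numpy as np

SENSOR_ROWS, SENSOR_COLS = 8, 8  # hypothetical grid of infrared receiving elements

def to_frame(raw_values):
    """Arrange one measurement into a two-dimensional temperature image:
    one temperature value (e.g., degrees Celsius) per receiving element."""
    frame = np.asarray(raw_values, dtype=np.float32)
    return frame.reshape(SENSOR_ROWS, SENSOR_COLS)
```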
- FIGS. 5A and 5B are diagrams illustrating an example of a case where a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 is present.
- FIG. 5A illustrates a behavior in which the user comes close to the front of the image processing apparatus 100 and then operates the operation display panel 130, and also illustrates a trajectory of movement of the user during a period from time t1_0 to time t1_4.
- FIG. 5B illustrates a behavior of changes of pieces of measured data which are acquired from the human presence sensor 120 at the respective times.
- FIGS. 6A and 6B are diagrams illustrating a case where a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 is not present.
- FIG. 6A illustrates a behavior in which the user comes close to the image processing apparatus 100, picks up a printed sheet of printing paper discharged to the sheet discharge unit 146, and then moves away from the image processing apparatus 100, and also illustrates a trajectory of movement of the user during a period from time t2_0 to time t2_5.
- FIG. 6B illustrates a behavior of changes of pieces of measured data which are acquired from the human presence sensor 120 at the respective times.
- a feature of the temperature distribution of measured data which is measured by the human presence sensor 120, and a feature of its time-series change such as those described above, vary depending on the environment in which the image processing apparatus 100 is installed and operating (hereinafter referred to as the "apparatus operating environment"). For example, a heat source which may introduce noise into the infrared array sensor can be present near the image processing apparatus 100. Moreover, depending on the installation location of the image processing apparatus 100, the trajectory of movement taken when a user approaches the image processing apparatus 100 with the intention to use it, or when a user merely passes by it, can differ.
- the presence of a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 may be erroneously estimated due to a difference in the apparatus operating environment of the image processing apparatus 100 .
- training a machine learning model which is described below with reference to FIGS. 7A, 7B, and 7C , (causing the machine learning model to perform learning) based on measured data which is acquired from the human presence sensor 120 when the image processing apparatus 100 is actually operating enables estimation that is adapted for each apparatus operating environment.
- FIG. 7A is a diagram illustrating an example of a machine learning model for estimating the presence of a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 .
- This machine learning model is configured as, for example, a recurrent neural network (RNN).
- an input Xt denotes measured data about the t-th frame acquired from the human presence sensor 120 .
- the input Xt is sequentially input to the model every time measured data about each frame is acquired.
- the input Xt is input to the RNN model through preprocessing.
- An example of the preprocessing is applying fast Fourier transform (FFT) to the measured data and then removing unnecessary frequency components from the transformed data.
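- As a concrete illustration of such preprocessing (a sketch under stated assumptions, not the patent's specified implementation), the following Python code applies an FFT to one sensor frame, keeps only low-frequency components on the assumption that high frequencies are noise, and flattens the result for input to the model; the keep_ratio parameter is hypothetical.

```python
import numpy as np

def preprocess(frame, keep_ratio=0.5):
    """FFT-based preprocessing sketch: transform a 2-D sensor frame, drop
    high-frequency components (assumed to be noise), invert, and flatten."""
    spectrum = np.fft.fftshift(np.fft.fft2(frame))
    rows, cols = frame.shape
    kr, kc = int(rows * keep_ratio / 2), int(cols * keep_ratio / 2)
    mask = np.zeros_like(spectrum)
    mask[rows // 2 - kr : rows // 2 + kr, cols // 2 - kc : cols // 2 + kc] = 1
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
    return filtered.ravel()  # flattened input Xt for the model
```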
- the RNN model is implemented, with use of, for example, a long short-term memory (LSTM), as a binary classification model which classifies, from measured data acquired from the human presence sensor 120, the presence or absence of a user who approaches the image processing apparatus 100 with the intention to use it.
- the learning process is used to obtain a weight coefficient W_in to be applied to an input to an intermediate layer, a weight coefficient W_out to be applied to an output from the intermediate layer, and a weight coefficient W_rec to be applied when a past output is set as a next input.
- the number of intermediate layers and the number of units of each layer are not particularly limited.
- An output Yt is an output which is obtained from an input of measured data about the t-th frame.
- the output Yt is an estimation accuracy of two values (0: a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 not being present, 1: a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 being present) into which the above-mentioned binary classification model performs classification.
- Performing learning with use of measured data acquired from the human presence sensor 120 when the image processing apparatus 100 is operating via the above-mentioned machine learning model enables obtaining the weight coefficients W_in, W_out, and W_rec which are used to perform estimation adapted for each apparatus operating environment.
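- The following minimal Python sketch (illustrative only; the patent mentions an LSTM, but a plain recurrent cell is used here for brevity) shows how the weight coefficients W_in, W_rec, and W_out act in the per-frame recurrence.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SimpleRNNEstimator:
    """Per-frame recurrence using the weight names from the description:
    W_in on the input, W_rec on the fed-back intermediate-layer output,
    and W_out on the output."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.W_out = rng.normal(0.0, 0.1, (1, n_hidden))
        self.h = np.zeros(n_hidden)

    def step(self, x_t):
        """x_t: preprocessed measured data of the t-th frame. Returns Yt,
        an estimation accuracy in [0, 1] (0: user absent, 1: user present)."""
        self.h = np.tanh(self.W_in @ x_t + self.W_rec @ self.h)
        return sigmoid(self.W_out @ self.h).item()
```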
- FIGS. 7B and 7C are diagrams illustrating an example of a training data set which is used for a training process (a learning process) of the machine learning model illustrated in FIG. 7A .
- the training data set includes human presence sensor data, which serves as an input to the machine learning model, and a label, which indicates the presence or absence of a user who approaches the image processing apparatus 100 with the intention to use it. The training data set is implemented as array data containing a predetermined number of entries for successive times (frames), each entry consisting of a piece of human presence sensor data and its label.
- each value set forth in the column of “index” represents an index value of the array data.
- Each value set forth in the column of “human presence sensor data” indicates that, for example, “tn_0” is measured data acquired from the human presence sensor 120 at time tn_0.
- Each value set forth in the column of “label” represents a binary state obtained by classification performed by the above-mentioned binary classification model.
- a training data set 701 illustrated in FIG. 7B represents an example of a case where a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 is present.
- the training data set is generated in such a manner that the value of the label changes from “0” to “1” at timing (time tn_N-c) at which it is desirable that the image processing apparatus 100 recognize that a user who uses the image processing apparatus 100 is present.
- a training data set 702 illustrated in FIG. 7C represents an example of a case where a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 is not present. Therefore, in this example, the training data set is generated in such a manner that the values of the labels at all of the times are “0”.
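- As a small illustration (not taken from the patent itself), the label columns of FIGS. 7B and 7C can be built as follows; make_labels is a hypothetical helper.

```python
def make_labels(n_frames, first_present_index=None):
    """Build the label column of the array data in FIGS. 7B and 7C.
    first_present_index corresponds to the timing (time tn_N-c in FIG. 7B)
    at which the apparatus should recognize that a user is present; None
    reproduces the all-"0" labels of FIG. 7C."""
    labels = [0] * n_frames
    if first_present_index is not None:
        labels[first_present_index:] = [1] * (n_frames - first_present_index)
    return labels

# Example: make_labels(8, 6) -> [0, 0, 0, 0, 0, 0, 1, 1]
```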
- the machine learning model is not limited to the example illustrated in FIGS. 7A to 7C, but can take any form.
- FIG. 8 is a flowchart illustrating an example of learned model updating processing, which is a part of the program 30 incorporated in the learned model generation apparatus 10 .
- the learned model updating processing is implemented by the CPU 11 of the learned model generation apparatus 10 loading the program 30 stored in, for example, the ROM 12 or the HDD 14 onto the RAM 13 and executing the program 30 as needed.
- the learned model updating processing is started when the learned model generation apparatus 10 has received an updating instruction issued by a user operation.
- the learned model updating processing can be started when a predetermined number or more of training data sets have been generated by the training data set generation unit 353 and the learned model generation apparatus 10 has received an updating instruction from the main controller 150 at that time.
- in step S1101, the training data set receiving unit 31 receives training data sets generated by the training data set generation unit 353 and then transmitted by the training data set transmitting unit 355, and stores the received training data sets in the RAM 13 or the HDD 14.
- in step S1102, the learned model updating unit 32 reads the training data sets stored in step S1101 one by one.
- in step S1103, the learned model updating unit 32 inputs the training data set read in step S1102 to the above-mentioned machine learning model, and performs learning of the model with use of, for example, an error back propagation algorithm or a gradient descent method, thus obtaining the weight coefficients W_in, W_out, and W_rec.
- in step S1104, the learned model updating unit 32 checks whether, among the training data sets received and stored in step S1101, there is any data set that has not yet been read and used for learning. If it is determined that there is a training data set that has not yet been used for learning (YES in step S1104), the learned model updating unit 32 returns the processing to step S1102 so as to perform learning processing using the next training data set.
- if all of the training data sets have been used for learning (NO in step S1104), the learned model updating unit 32 advances the processing to step S1105.
- in step S1105, the learned model transmitting unit 33 transmits an updating instruction for the learned model to the user estimation unit 160 via the network 20, and then ends the learned model updating processing.
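- The following Python sketch (a hedged illustration using PyTorch; the patent does not name a framework or exact architecture) shows what steps S1102 to S1104 amount to: iterating over the stored training data sets and fitting the model by error back propagation and gradient descent.

```python
import torch
import torch.nn as nn

class PresenceModel(nn.Module):
    """Binary classification model over time-series frames (cf. FIG. 7A);
    an LSTM stands in for the intermediate layer."""
    def __init__(self, n_features, n_hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, 1)

    def forward(self, x):                      # x: (batch, T, n_features)
        h, _ = self.lstm(x)
        return torch.sigmoid(self.head(h)).squeeze(-1)  # Yt per frame

def update_learned_model(model, training_sets, lr=1e-2):
    """Steps S1102 to S1104: read the stored training data sets one by one
    and fit by error back propagation / gradient descent."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCELoss()
    for frames, labels in training_sets:       # frames: (T, n_features),
        optimizer.zero_grad()                  # labels: (T,) float 0./1.
        y = model(frames.unsqueeze(0))         # add a batch dimension
        loss = loss_fn(y, labels.unsqueeze(0))
        loss.backward()                        # error back propagation
        optimizer.step()                       # gradient descent update
    return model
```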
- FIG. 9 is a flowchart illustrating an example of user estimation processing, which is a part of the program 360 incorporated in the user estimation unit 160 .
- the user estimation processing is implemented by the sub-CPU 161 of the user estimation unit 160 loading the program 360 stored in, for example, the sub-ROM 162 onto the sub-RAM 163 and executing the program 360 as needed. Furthermore, the user estimation processing is started when electric power is supplied to the user estimation unit 160 by the power source management unit 180 .
- in step S1201, the learned model receiving unit 361 checks whether the above-mentioned updating instruction for the learned model, transmitted by the learned model transmitting unit 33 via the network 20, has been received. If it is determined that the updating instruction has been received (YES in step S1201), the learned model receiving unit 361 advances the processing to step S1202.
- in step S1202, the learned model receiving unit 361 receives the latest learned model (for example, the weight coefficients W_in, W_out, and W_rec), which has been updated by the learned model updating unit 32, from the learned model transmitting unit 33, and updates the learned model stored in the sub-RAM 163 with the received latest learned model.
- if it is determined in step S1201 that the updating instruction for the learned model has not been received (NO in step S1201), the learned model receiving unit 361 advances the processing to step S1203.
- in step S1203, the human presence sensor data acquisition unit 363 acquires measured data (human presence sensor data) from the human presence sensor 120.
- in step S1204, the user presence or absence estimation unit 362 estimates the presence or absence of a user who approaches the image processing apparatus 100 with the intention to use it, by inputting the measured data acquired in step S1203 to the learned model stored in the sub-RAM 163.
- in step S1205, the user presence or absence estimation unit 362 determines whether the estimation result obtained in step S1204 has changed from "absence of user" to "presence of user". More specifically, the user presence or absence estimation unit 362 determines whether the output Yt, which is an estimation accuracy between the two values (0: such a user not being present, 1: such a user being present), has changed from less than 0.5 to greater than or equal to 0.5.
- if it is determined that the estimation result has changed from "absence of user" to "presence of user" (YES in step S1205), the user presence or absence estimation unit 362 advances the processing to step S1206.
- in step S1206, the estimation result transmitting unit 364 communicates the estimation result to the power source management unit 180 and the main controller 150, and the processing then proceeds to step S1207.
- with this communication, the power source management unit 180 becomes able to cause the image processing apparatus 100 to return from the sleep mode to the standby mode.
- moreover, the main controller 150 is able to recognize that the estimation result has changed from "absence of user" to "presence of user".
- if it is determined in step S1205 that the estimation result has not changed from "absence of user" to "presence of user" (NO in step S1205), the user presence or absence estimation unit 362 advances the processing to step S1207.
- in step S1207, the human presence sensor data acquisition unit 363 stores the measured data acquired in step S1203 in the sub-RAM 163. As a result, a predetermined number of previously acquired pieces of measured data are buffered in the sub-RAM 163.
- in step S1208, if the image processing apparatus 100 is not powered off (NO in step S1208), the learned model receiving unit 361 returns the processing to step S1201, thus continuing the processing. If the image processing apparatus 100 is powered off (YES in step S1208), the user estimation processing ends.
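- To tie the loop together, here is a minimal Python sketch of steps S1203 to S1208 (an illustration, not the patent's implementation): estimate every acquired frame, notify only on a change from "absence of user" to "presence of user", and buffer the measured data for later training data set generation. sensor.read(), notify_presence(), and powered_off() are hypothetical stand-ins for the human presence sensor 120, the estimation result transmitting unit 364, and the power-off check; model.step() follows the SimpleRNNEstimator sketch above.

```python
from collections import deque

THRESHOLD = 0.5   # decision boundary used in step S1205
BUFFER_LEN = 32   # hypothetical "predetermined number" of buffered frames

def user_estimation_loop(sensor, model, notify_presence, powered_off):
    buffer = deque(maxlen=BUFFER_LEN)
    prev_present = False
    while not powered_off():                 # step S1208
        x_t = sensor.read()                  # step S1203: acquire measured data
        y_t = model.step(x_t)                # step S1204: estimation accuracy Yt
        present = y_t >= THRESHOLD
        if present and not prev_present:     # step S1205: "absence" -> "presence"
            notify_presence()                # step S1206: wake from sleep mode
        prev_present = present
        buffer.append(x_t)                   # step S1207: buffer for training data
    return buffer
```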
- FIG. 10 is a flowchart illustrating an example of training data set generation processing, which is a part of the program 350 incorporated in the main controller 150 .
- the training data set generation processing is implemented by the main CPU 151 of the main controller 150 loading the program 350 stored in, for example, the main ROM 152 or the HDD 154 onto the main RAM 153 and executing the program 350 as needed. Furthermore, the training data set generation processing is started in conjunction with the main controller 150 starting operating.
- in step S1301, the training data set generation unit 353 initializes a timer for measuring the time elapsing from reception of the estimation result transmitted from the estimation result transmitting unit 364, and causes the timer to stop measuring time.
- in step S1302, the training data set generation unit 353 determines whether the timer is performing time measurement. If it is determined that the timer is not performing time measurement (NO in step S1302), the training data set generation unit 353 advances the processing to step S1303.
- in step S1303, the training data set generation unit 353 checks whether the human presence sensor data receiving unit 354 has received the communication of the estimation result transmitted in step S1206 illustrated in FIG. 9. This communication is issued in a case where the estimation result obtained by the user estimation unit 160 has changed from "absence of user" to "presence of user".
- if it is determined that the communication has been received (YES in step S1303), the training data set generation unit 353 determines that the estimation result has changed from "absence of user" to "presence of user" and thus advances the processing to step S1304.
- in step S1304, the training data set generation unit 353 resets the timer to "0", causes the timer to start measuring time, and then advances the processing to step S1306.
- on the other hand, if it is determined in step S1302 that the timer is performing time measurement (YES in step S1302), the training data set generation unit 353 skips step S1304 and advances the processing to step S1306.
- in step S1306, the training data set generation unit 353 determines whether a user operation is detected by the user operation detection unit 352. If no user operation is detected (NO in step S1306), the training data set generation unit 353 advances the processing to step S1307.
- in step S1307, the training data set generation unit 353 determines whether the timer performing time measurement has detected passage of a predetermined time. If it is determined that the timer has not yet detected passage of the predetermined time (NO in step S1307), the training data set generation unit 353 returns the processing to step S1302.
- if it is determined in step S1307 that the timer performing time measurement has detected passage of the predetermined time (YES in step S1307), the training data set generation unit 353 advances the processing to step S1309.
- in step S1309, the training data set generation unit 353 receives, via the human presence sensor data receiving unit 354, the measured data transmitted from the human presence sensor data transmitting unit 365.
- the measured data received here corresponds to the predetermined number of pieces of measured data which have been buffered in the sub-RAM 163 in step S1207 illustrated in FIG. 9.
- the training data set generation unit 353 then generates a training data set in which the label "0" is set to all of the predetermined number of pieces of measured data.
- the label “0” represents “a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 is not present”.
- the label “1” represents “a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 is present”.
- if it is determined in step S1306 that a user operation is detected by the user operation detection unit 352 (YES in step S1306), the training data set generation unit 353 advances the processing to step S1308.
- This corresponds to a case where a user operation has been detected within the predetermined time elapsing from when “presence of user” has been estimated. In other words, this corresponds to “a case where the estimation result about the presence or absence of a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 and the actual presence or absence of such a user coincide with each other”. Details thereof are described below with reference to FIGS. 11A, 11B, and 11C .
- in step S1308, the training data set generation unit 353 likewise receives, via the human presence sensor data receiving unit 354, the measured data transmitted from the human presence sensor data transmitting unit 365, that is, the predetermined number of pieces of measured data which have been buffered in the sub-RAM 163 in step S1207 illustrated in FIG. 9, and generates a training data set in which labels are set to the received measured data.
- more specifically, in step S1308, the training data set generation unit 353 generates a training data set in which the label "1" is set to measured data close to the time at which the user operation was detected, out of the predetermined number of pieces of measured data, and the label "0" is set to the other pieces of measured data.
- the above-mentioned "measured data close to the time at which the user operation has been detected" is assumed to represent, for example, the measured data obtained at the time at which the user operation was detected and the measured data obtained one time before that time.
- alternatively, the range from the measured data obtained at the timing at which the user operation was detected to the measured data obtained two times before that timing can be set as the above-mentioned "measured data close to the time at which the user operation has been detected".
- if it is determined in step S1303 that the human presence sensor data receiving unit 354 has not received the communication of the estimation result (NO in step S1303), the training data set generation unit 353 determines that "absence of user" has been estimated and thus advances the processing to step S1305.
- in step S1305, the training data set generation unit 353 determines whether a user operation is detected by the user operation detection unit 352. If no user operation is detected (NO in step S1305), the training data set generation unit 353 returns the processing to step S1302.
- if a user operation is detected (YES in step S1305), the training data set generation unit 353 advances the processing to step S1308.
- in step S1310, after the training data set is generated, the training data set generation unit 353 stops time measurement by the timer.
- in step S1311, if the main controller 150 continues operating (YES in step S1311), the training data set generation unit 353 returns the processing to step S1302, thus continuing the processing. If the main controller 150 stops (NO in step S1311), the training data set generation processing ends.
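- As a compact illustration of the labeling performed in steps S1308 and S1309 (a sketch under the assumptions noted in the comments, not the patent's implementation), consider the following Python helper.

```python
def generate_training_data_set(buffered_frames, operation_detected, n_close=2):
    """Pair each buffered frame with a label. If a user operation was
    detected, the last n_close frames (the "measured data close to the time
    at which the user operation has been detected") receive the label 1
    (step S1308); otherwise every frame receives the label 0 (step S1309).
    n_close=2 follows the example of the detection time and one time before
    it; the buffer is assumed to be ordered oldest first."""
    labels = [0] * len(buffered_frames)
    if operation_detected:
        for i in range(1, min(n_close, len(labels)) + 1):
            labels[-i] = 1
    return list(zip(buffered_frames, labels))
```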
- FIGS. 11A to 11C to FIGS. 13A to 13C are diagrams illustrating examples of training data sets which are generated by the training data set generation processing. Details thereof are described as follows.
- FIGS. 11A to 11C correspond to a case where, after the user presence or absence estimation unit 362 has estimated that a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 is present, the user performs an operation with respect to the image processing apparatus 100 .
- FIGS. 11A to 11C correspond to “a case where the estimation result about the presence or absence of a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 and the actual presence or absence of such a user coincide with each other”.
- FIG. 11A illustrates an example of a trajectory of movement of the user during a period from time t3_0 to time t3_4.
- FIG. 11B illustrates a list of “user presence or absence estimation results” and “user operation detection results” obtained at the respective times.
- “user presence or absence estimation result” represents a result which the user presence or absence estimation unit 362 outputs using, as an input, measured data acquired from the human presence sensor 120 at the time indicated by the value in the column of “time”, with use of the above-mentioned machine learning model.
- the user presence or absence estimation result "N" at time t3_0 indicates estimating that a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 is not present.
- the user presence or absence estimation result "Y" at time t3_3 indicates estimating that a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 is present.
- “user operation detection result” represents a result which the user operation detection unit 352 detects at the time indicated by the value in the column of “time”.
- the user operation detection result "N" at time t3_0 indicates that no user operation is detected by the user operation detection unit 352.
- the user operation detection result "Y" at time t3_4 indicates that a user operation is detected by the user operation detection unit 352.
- FIG. 11C illustrates an example of a training data set which is generated under the state illustrated in FIG. 11B .
- the training data set 801 is a training data set which is generated in step S1308 illustrated in FIG. 10 as a result of a user operation being detected in step S1306 at time t3_4.
- the training data set is generated in such a manner that the label "1" is set in association with the pieces of human presence sensor data obtained at times close to time t3_4 (for example, time t3_4 itself and time t3_3, which is one time before it), and the label "0" is set to the pieces of human presence sensor data obtained at the other times.
- Generating a training data set in this way enables increasing pieces of training data in “a case where the estimation result about the presence or absence of a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 and the actual presence or absence of such a user coincide with each other”.
- FIGS. 12A to 12C correspond to a case where, although the user presence or absence estimation unit 362 has estimated that a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 is not present, actually, a user with the intention to use the image processing apparatus 100 is present and the user performs an operation with respect to the image processing apparatus 100.
- FIGS. 12A to 12C correspond to "a case where the estimation result about the presence or absence of a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 and the actual presence or absence of such a user do not coincide with each other".
- FIG. 12A illustrates an example of a trajectory of movement of the user during a period from time t4_0 to time t4_3.
- FIG. 12B illustrates a list of “user presence or absence estimation results” and “user operation detection results” obtained at the respective times.
- FIG. 12C illustrates an example of a training data set which is generated under the state illustrated in FIG. 12B.
- The training data set 901 is generated in step S1308 illustrated in FIG. 10 as a result of a user operation being detected in step S1305 at time t4_3.
- The training data set is generated in such a manner that the label "1" is set in association with the pieces of human presence sensor data obtained at times close to time t4_3 (for example, time t4_3 itself and the immediately preceding time t4_2), and the label "0" is set in association with the pieces obtained at the other times.
- Generating a training data set in this way makes it possible to accumulate training data for "a case where the estimation result about the presence or absence of a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 and the actual presence or absence of such a user do not coincide with each other".
- FIGS. 13A to 13C correspond to a case where, although the user presence or absence estimation unit 362 has estimated that a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 is present, actually, a user with the intention to use the image processing apparatus 100 is not present.
- FIGS. 13A to 13C correspond to "a case where the estimation result about the presence or absence of a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 and the actual presence or absence of such a user do not coincide with each other".
- FIG. 13A illustrates an example of a trajectory of movement of the user during a period from time t5_0 to time t5_6.
- FIG. 13B illustrates a list of “user presence or absence estimation results” and “user operation detection results” obtained at the respective times.
- FIG. 13C illustrates an example of a training data set which is generated under the state illustrated in FIG. 13B.
- The training data set 1001 is generated in step S1309 illustrated in FIG. 10 as a result of determining, in steps S1306 and S1307 at time t5_6, that no user operation has been detected even after a predetermined time has elapsed since it was estimated that a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 is present; in this case, the label "0" is set in association with all the pieces of human presence sensor data (see the sketch after this item).
- Generating a training data set in this way makes it possible to accumulate training data for "a case where the estimation result about the presence or absence of a user who approaches the image processing apparatus 100 with the intention to use the image processing apparatus 100 and the actual presence or absence of such a user do not coincide with each other".
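- For completeness, the timeout branch can reuse the same buffered samples: because no operation was detected, there is no positive window and every sample becomes a negative example. As before, this is a hedged sketch, not the patent's prescribed code.

```python
def build_training_set_on_timeout(samples):
    """No user operation followed the positive estimate within the
    predetermined time, so label every buffered sample 0."""
    return [(data, 0) for (_time, data) in samples]
```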
- The automatically generated training data set is thus a set of data in which time-series data obtained from a human presence sensor, actually measured while the image processing apparatus is operating, is associated with a label indicating whether a user who uses the image processing apparatus is present at the timing of each piece of the measured data. Performing learning with such a training data set makes it possible to appropriately build, for each apparatus operating environment, a model that estimates the presence of a user who will use the apparatus.
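- As a concrete illustration of performing learning with such a training data set, the following sketch trains a small recurrent classifier on (sensor sequence, label) pairs. The model family (an LSTM), its sizes, the optimizer, and the use of PyTorch are assumptions made for illustration; the patent does not prescribe a particular architecture or framework.

```python
import torch
import torch.nn as nn

class PresenceEstimator(nn.Module):
    """Toy time-series classifier standing in for the estimation model."""
    def __init__(self, input_size=1, hidden_size=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                # x: (batch, seq_len, input_size)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # logit for "user present"

model = PresenceEstimator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Toy tensors standing in for the auto-generated training data set:
# 8 buffered sensor sequences of 10 samples each, with 0/1 labels.
x = torch.rand(8, 10, 1)
y = torch.randint(0, 2, (8, 1)).float()

for _ in range(100):                     # simple training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                      # backpropagation through time
    optimizer.step()
```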
- Embodiment(s) can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
- the computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
- the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
- the storage medium may include, for example, one or more of a hard disk, a random access memory (RAM), a read-only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JPJP2018-149173 | 2018-08-08 | ||
JP2018149173A JP7233868B2 (en) | 2018-08-08 | 2018-08-08 | Learning system for information processing device, information processing device, control method for information processing device, and program |
JP2018-149173 | 2018-08-08 | | |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200053234A1 (en) | 2020-02-13 |
US11509781B2 (en) | 2022-11-22 |
Family
ID=69406738
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/529,591 Active US11509781B2 (en) | 2018-08-08 | 2019-08-01 | Information processing apparatus, and control method for information processing apparatus |
Country Status (2)
Country | Link |
---|---|
US (1) | US11509781B2 (en) |
JP (1) | JP7233868B2 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6915638B2 (en) * | 2019-03-08 | 2021-08-04 | Seiko Epson Corporation | Failure time estimation device, machine learning device, failure time estimation method |
JP2023030344A (en) * | 2021-08-23 | 2023-03-08 | FUJIFILM Business Innovation Corp. | Image-forming apparatus |
US11782373B2 (en) * | 2022-01-07 | 2023-10-10 | Toshiba Tec Kabushiki Kaisha | Image forming apparatus with function of preventing secondary infection |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6696246B2 (en) * | 2016-03-16 | 2020-05-20 | Fuji Xerox Co., Ltd. | Image processing device and program |
JP2017228091A (en) * | 2016-06-22 | 2017-12-28 | Nippon Telegraph and Telephone Corporation | Walking state learning method, walking state estimation method, road surface condition comprehension method, device and program |
JP6436148B2 (en) * | 2016-11-18 | 2018-12-12 | Yokogawa Electric Corporation | Information processing apparatus, maintenance device, information processing method, information processing program, and recording medium |
- 2018-08-08: Application JP2018149173A filed in Japan; issued as JP7233868B2 (status: Active)
- 2019-08-01: Application US16/529,591 filed in the United States; issued as US11509781B2 (status: Active)
Patent Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06189048A (en) | 1992-09-14 | 1994-07-08 | Ricoh Co Ltd | Controller for operation display, image forming device and controller for turning on power source |
US5822077A (en) * | 1992-09-14 | 1998-10-13 | Ricoh Company, Ltd. | Determination unit for determining, through detecting information associated with external object, whether or not external object will use functional apparatus or who is external object, so as to provide appropriate service |
JP2010147725A (en) | 2008-12-17 | 2010-07-01 | Canon Inc | Image processing apparatus and method of controlling the same |
US20140029037A1 (en) * | 2008-12-17 | 2014-01-30 | Canon Kabushiki Kaisha | Image processing apparatus and method of controlling the same |
US20110029465A1 (en) * | 2009-08-03 | 2011-02-03 | Masato Ito | Data processing apparatus, data processing method, and program |
US20110112997A1 (en) * | 2009-11-11 | 2011-05-12 | Kohtaro Sabe | Information processing device, information processing method, and program |
JP2012008771A (en) | 2010-06-24 | 2012-01-12 | Sony Corp | Information processing device, information processing system, information processing method and program |
US20110319094A1 (en) * | 2010-06-24 | 2011-12-29 | Sony Corporation | Information processing apparatus, information processing system, information processing method, and program |
US20130232515A1 (en) * | 2011-12-02 | 2013-09-05 | Microsoft Corporation | Estimating engagement of consumers of presented content |
US20150261168A1 (en) * | 2014-03-17 | 2015-09-17 | Canon Kabushiki Kaisha | Image forming apparatus, method for controlling the same, and recording medium |
US9959506B1 (en) * | 2014-06-17 | 2018-05-01 | Amazon Technologies, Inc. | Predictive content retrieval using device movements |
US10605470B1 (en) * | 2016-03-08 | 2020-03-31 | BrainofT Inc. | Controlling connected devices using an optimization function |
JP2018019361A (en) | 2016-07-29 | 2018-02-01 | キヤノン株式会社 | Device, method and program for detecting person using ultrasonic sensor |
US20180082206A1 (en) * | 2016-09-16 | 2018-03-22 | Foursquare Labs, Inc. | Passive Visit Detection |
US20180122379A1 (en) * | 2016-11-03 | 2018-05-03 | Samsung Electronics Co., Ltd. | Electronic device and controlling method thereof |
US20180267592A1 (en) * | 2017-03-15 | 2018-09-20 | Ricoh Company, Ltd. | Information processing apparatus |
JP2017135748A (en) | 2017-04-24 | 2017-08-03 | キヤノン株式会社 | Printing device |
US20190050688A1 (en) * | 2017-08-09 | 2019-02-14 | Intel Corporation | Methods and apparatus to analyze time series data |
US10083006B1 (en) * | 2017-09-12 | 2018-09-25 | Google Llc | Intercom-style communication using multiple computing devices |
US10305766B1 (en) * | 2017-11-07 | 2019-05-28 | Amazon Technologies, Inc. | Coexistence-insensitive presence detection |
US10558862B2 (en) * | 2018-01-12 | 2020-02-11 | Intel Corporation | Emotion heat mapping |
US10318890B1 (en) * | 2018-05-23 | 2019-06-11 | Cognitive Systems Corp. | Training data for a motion detection system using data from a sensor device |
US20200104999A1 (en) * | 2018-09-28 | 2020-04-02 | Adobe Inc. | Tracking viewer engagement with non-interactive displays |
Non-Patent Citations (1)
Title |
---|
Thomas Price, Eytan Lerba, "Machine Learning to Automatically Lock Device Screen at Opportune Time", Dec. 15, 2017, ip.com, IP.com No. IPCOM000252082D, pp. 1-35 (Year: 2017). * |
Also Published As
Publication number | Publication date |
---|---|
US20200053234A1 (en) | 2020-02-13 |
JP2020024597A (en) | 2020-02-13 |
JP7233868B2 (en) | 2023-03-07 |
Similar Documents
Publication | Title |
---|---|
US11509781B2 (en) | Information processing apparatus, and control method for information processing apparatus | |
US9367005B2 (en) | Imaging device and method for determining operating parameters | |
CN106341569B (en) | The control method of image forming apparatus and the image forming apparatus | |
US20140002843A1 (en) | Image forming apparatus and control method therefor | |
US10303102B2 (en) | Image forming apparatus and image forming system | |
US11022923B2 (en) | Image forming apparatus, and method for controlling the same | |
CN104243751A (en) | IMAGE FORMING APPARATUS and METHOD FOR CONTROLLING IMAGE FORMING APPARATUS | |
US9071716B2 (en) | Sheet carrying apparatus and image forming apparatus having sheet detection power shifter | |
US20150185807A1 (en) | Information processing apparatus, method for controlling information processing apparatus, program, and recording medium | |
JP2006205671A (en) | Information processor and its controlling method | |
US11200472B2 (en) | Identification apparatus, processing method, and non-transitory computer-readable storage medium | |
US12010280B2 (en) | Machine learning device, machine learning method, and machine learning program | |
JP2013257459A (en) | Abnormality cause identifying device, maintenance execution period prediction system with the same, and abnormality cause identifying method | |
EP2998828B1 (en) | Printing apparatus, method for controlling printing apparatus, and recording medium | |
JP2008155482A (en) | Image printer and control method | |
US10838672B2 (en) | Image forming apparatus | |
US9389552B2 (en) | Image forming apparatus with fixing member temperature control | |
US20210195045A1 (en) | Image forming apparatus to detect user and method for controlling thereof | |
US11644780B2 (en) | Image forming apparatus that provides management apparatus with data that can be utilized for data analysis, control method for the image forming apparatus, storage medium, and management system | |
JP6347231B2 (en) | Image forming apparatus, image forming system, and image forming method | |
US11402786B2 (en) | Recording medium conveyance device, recording medium conveyance method and non-transitory computer-readable recording medium encoded with recording medium conveyance program | |
US11281144B2 (en) | Image forming apparatus and control method | |
US20230022271A1 (en) | Media sensor of image forming device for compensating transmitted light amount based on temperature data | |
JP6753284B2 (en) | Image processing device, mode switching method and mode switching program | |
JP6608481B2 (en) | Information processing apparatus, information processing apparatus control method, program, and recording medium |
Legal Events
Code | Title | Description |
---|---|---|
FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
AS | Assignment | Owner name: CANON KABUSHIKI KAISHA, JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TAKAHASHI, TORU;REEL/FRAME:050993/0462. Effective date: 20190719 |
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED |
STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
STPP | Information on status: patent application and granting procedure in general | AWAITING TC RESP., ISSUE FEE NOT PAID |
STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
STCF | Information on status: patent grant | PATENTED CASE |