WO2023127377A1 - Electronic apparatus, control method, and program - Google Patents

Electronic apparatus, control method, and program

Info

Publication number
WO2023127377A1
Authority
WO
WIPO (PCT)
Prior art keywords
mode
imaging
application
human body
electronic device
Prior art date
Application number
PCT/JP2022/044010
Other languages
French (fr)
Japanese (ja)
Inventor
征志 中田
功一朗 井上
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2023127377A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3206Monitoring of events, devices or parameters that trigger a change in power modality
    • G06F1/3231Monitoring the presence, absence or movement of users
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234Power saving characterised by the action undertaken
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/725Cordless telephones
    • H04M1/73Battery saving arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/65Control of camera operation in relation to power supply

Definitions

  • the present disclosure relates to electronic equipment, control methods, and programs.
  • Patent Literature 1 describes an electronic device that shifts to high power consumption operation when a reference object such as a face is detected during low power consumption sensing.
  • With the technique described in Patent Document 1, after shifting to the high power consumption operation, the user is still required to launch an application, for example for camera shooting or information registration. That is, with the technique described in Patent Document 1, it is difficult to achieve both suppression of power consumption and improvement of operability in electronic devices.
  • One object of the present disclosure is to provide an electronic device, a control method, and a program that can improve operability when executing an application while reducing power consumption, for example.
  • an imaging device that performs imaging in a first mode or in a second mode that consumes less power than the first mode
  • an application processor that executes an application corresponding to the part of the human body and the predetermined object when the part of the human body and the predetermined object are detected by imaging in the second mode of the imaging device.
  • the imaging device performs imaging in the first mode or in a second mode that consumes less power than the first mode
  • the application processor executes an application corresponding to the part of the human body and the predetermined object when the part of the human body and the predetermined object are detected by imaging in the second mode of the image sensor; a program causes a computer to execute this control method.
  • an imaging element that performs imaging in the first mode or in a second mode that consumes less power than the first mode, and that detects whether or not the electronic device is in a visible state according to the imaging result in the second mode; and an application processor that executes a predetermined application when the result of the detection is the visible state.
  • the imaging device performs imaging in the first mode or in a second mode that consumes less power than the first mode, and detects whether or not the electronic device is in a visible state according to the imaging result in the second mode
  • A program causes a computer to execute a control method in which an application processor executes a predetermined application when the result of the detection is the visible state.
  • FIG. 4 is a flowchart for explaining an operation example of a smart phone according to one embodiment;
  • FIGS. 5 to 8 are diagrams referred to when describing examples of applications that are activated when a part of a human body and a predetermined object are detected;
  • The remaining figures are diagrams for explaining modifications.
  • Mobile terminals such as smartphones and smartwatches (watch-type mobile terminals) commonly turn off the power of the display (hereinafter referred to as the sleep state) when there is no operation input for a certain period of time, or transition to a state in which operation input cannot be accepted unless a predetermined password is input (hereinafter appropriately referred to as the locked state).
  • Many applications have been distributed via networks such as the Internet, and it has become possible to install many applications on mobile terminals. Examples of these applications include applications for making payments, applications for taking pictures, applications for playing games, applications for making purchases, applications for registering usage histories of medicines, applications for registering biometric information such as exercise history (for example, walking), pulse, weight, and blood pressure, and map applications.
  • Consider a case where the user launches and executes an application when the mobile terminal is in the locked state.
  • the user first performs an operation to unlock the mobile terminal.
  • the user enters a password consisting of numbers and letters into the mobile terminal.
  • Then, the user selects a desired application from the many applications installed on the mobile terminal and activates it by performing a touch operation on the icon of the application.
  • the user has to perform many operations in order to start and run one application.
  • Even if the sleep state and the locked state can be automatically released, the operation for selecting and executing the application is still required, so the operability cannot be further improved.
  • a smartphone will be described as an example of an electronic device.
  • The present disclosure can also be applied to mobile terminals such as tablet computers and smart watches as electronic devices.
  • FIG. 1 is a diagram showing an example of the appearance of a smartphone (smartphone 100) according to this embodiment.
  • The smartphone 100 has a housing 11.
  • A display 12 is provided on one main surface of the housing 11.
  • A front camera 13A that captures an image of the user of the smartphone 100 or the like is provided, for example, on the upper side of the display 12.
  • A rear camera 13B is provided on the main surface opposite to the main surface on which the display 12 is provided. A part of a human body and a predetermined object can be imaged by the front camera 13A and the rear camera 13B.
  • A button 14 for turning the power on and off is provided on a side surface of the housing 11.
  • FIG. 2 is a block diagram showing an internal configuration example of the smartphone 100 according to this embodiment.
  • The smartphone 100 includes a control unit 101, a microphone 102, an audio signal processing unit 103 connected to the microphone 102, an imaging unit 104, a network unit 105, a network signal processing unit 106 connected to the network unit 105, a speaker 107, an audio reproduction unit 108 connected to the speaker 107, the display 12 described above, a screen display unit 109 connected to the display 12, a position sensor 110, and a sensor 111.
  • The audio signal processing section 103, the imaging unit 104, the network signal processing section 106, the audio reproduction section 108, the screen display section 109, the position sensor 110, and the sensor 111 are each connected to the control section 101.
  • the control unit 101 is composed of a CPU (Central Processing Unit) and the like.
  • The control unit 101 has a ROM (Read Only Memory) in which programs are stored, a RAM (Random Access Memory) used as a work area when the programs are executed, and the like (none of which are shown).
  • the control unit 101 comprehensively controls the operation of the smartphone 100 .
  • the control unit 101 has an application processor 101A as a functional block.
  • When a part of the human body and a predetermined object are detected, the application processor 101A executes an application (for example, software that performs a specific process) corresponding to them.
  • execution of the application includes at least activation of the application.
  • the execution of the application may include processing for displaying an input screen for each application on the display 12 in response to activation, or may include control for automatically performing registration according to the application. It is possible to set as appropriate how much processing after the application is started is included in the execution of the application.
  • the smartphone 100 may store a plurality of applications. In this case, execution of the application may include a process of selecting a predetermined application from a plurality of applications.
  • the microphone 102 picks up the user's speech and the like.
  • the audio signal processing unit 103 performs known audio signal processing on audio data of sounds picked up via the microphone 102 .
  • the imaging unit 104 includes, for example, an optical system 104A such as a lens and an imaging element 104B.
  • a CMOS (Complementary Metal Oxide Semiconductor) sensor or a CCD (Charge Coupled Device) sensor can be applied as the imaging element 104B.
  • the image sensor 104B has a signal processing circuit 104C.
  • it is configured as a one-chip sensor in which the imaging device 104B and the signal processing circuit 104C are stacked.
  • the network unit 105 includes an antenna and the like.
  • a network signal processing unit 106 performs modulation/demodulation processing, error correction processing, and the like on data transmitted and received via the network unit 105 .
  • the audio reproduction unit 108 performs processing for reproducing sound from the speaker 107 .
  • the audio reproduction unit 108 performs known audio signal processing such as amplification processing and D/A conversion processing, for example.
  • the screen display unit 109 performs known processing for displaying various information on the display 12 .
  • the screen display unit 109 performs processing for displaying a UI corresponding to the application on the display 12 under the control of the application processor 101A.
  • the display 12 may be configured as a touch panel. In this case, the screen display unit 109 also performs detection processing of the operation position associated with the touch operation.
  • the position sensor 110 is a positioning unit that measures the current position using, for example, a system called GNSS (Global Navigation Satellite System).
  • the sensor 111 is a general term for sensors other than the image sensor 104B and the position sensor 110.
  • the sensor 111 for example, a sensor that detects the movement and state of the smartphone 100 can be used.
  • the sensor 111 includes an acceleration sensor, a gyro sensor, an electronic compass, and the like.
  • Sensors 111 may also include sensors that detect the surrounding environment. These sensors include a sensor that measures temperature, a sensor that measures humidity, a sensor that measures atmospheric pressure, a sensor that measures ambient brightness (illuminance), and the like.
  • the sensor 111 may include a sensor that detects biometric information of the user of the smart phone 100 . These sensors include sensors that detect the user's blood pressure, pulse, sweat glands, body temperature, and fingerprints.
  • the imaging element 104B has a pixel array section 121 in which light receiving elements such as photodiodes are arranged two-dimensionally.
  • the pixel array section 121 includes a horizontal scanning circuit, a vertical scanning circuit, an A/D (Analog to Digital) circuit, etc., which are connected to the light receiving elements (not shown).
  • The image sensor 104B includes, as the signal processing circuit 104C, a system controller 131, a power control unit 132, a clock generation circuit 133, a ring oscillator 134 connected to the clock generation circuit 133, a PLL (Phase Locked Loop) 135, a BIAS 136, a timing generator 137, a sensor I/F (Interface) 138, a detection section 139, a camera signal processing section 140, and an output I/F 141.
  • the system controller 131 , power control section 132 , timing generator 137 , detection section 139 , camera signal processing section 140 and output I/F 141 are connected to each other via a bus 151 .
  • the system controller 131 has, for example, a microprocessor, and centrally controls the operation of each unit in the imaging device 104B.
  • the system controller 131 has an output I/F 131A. Commands are transmitted and received between the system controller 131 and the control unit 101 (for example, the application processor 101A) via the output I/F 131A. Commands are transmitted and received via a serial interface such as I2C .
  • the power control unit 132 controls power supplied to each unit. Although the details will be described later, the power control unit 132 controls power supplied to each unit according to the operation mode of the imaging element 104B.
  • the clock generation circuit 133 generates clock signals based on the outputs of the ring oscillator 134 and the PLL 135 .
  • a clock signal generated by the clock generating circuit 133 is supplied to each part of the imaging element 104B, and operations are performed based on the clock signal.
  • the BIAS 136 generates and supplies a stable reference voltage and reference current to each circuit that controls the pixel array section 121 and processes output signals.
  • the timing generator 137 generates various timing signals. Various timing signals generated by the timing generator are supplied to the pixel drive circuit of the pixel array section 121 and the sensor I/F 138 .
  • the sensor I/F 138 is an interface for outputting image data (for example, digitized image data) output from the pixel array unit 121 to a subsequent stage.
  • the sensor I/F 138 operates based on timing signals supplied from the timing generator 137 .
  • a detection unit 139 and a camera signal processing unit 140 are connected to the rear stage of the sensor I/F 138 .
  • Based on the image data supplied from the sensor I/F 138, the detection unit 139 detects whether or not the image data includes a part of the human body, such as a face or a hand, or a predetermined object.
  • the predetermined object may be a part of the human body, an object grasped by a part of the human body, or an object existing at a position separate from the part of the human body.
  • a learning model obtained by performing machine learning such as DNN (Deep Neural Network) may be applied to the detection processing performed by the detection unit 139 .
  • the detection unit 139 notifies the detection result to the system controller 131 via the bus 151 .
  • the camera signal processing unit 140 performs known image processing on image data supplied from the sensor I/F 138 .
  • Known image processing includes interpolation processing, color correction, defect correction, and the like.
  • Image data subjected to camera signal processing by the camera signal processing unit 140 is supplied to the control unit 101 via the output I/F 141 .
  • An image based on the image data is displayed on the display 12 by operating the screen display unit 109 based on the control of the control unit 101 .
  • the output I/F 141 for example, MIPI (Mobile Industry Processor Interface) can be applied.
  • The front camera 13A and the rear camera 13B may each have the configuration of the imaging unit 104 described above, or may share a part of that configuration.
  • the smartphone 100 always performs a process of detecting a part of the human body and a predetermined object while the power is on. For example, even in a sleep state or a locked state, a process of detecting a part of the human body and a predetermined object is performed. Of course, there may be a period during which a part of the human body and a predetermined object are not detected, such as during operation of the smartphone 100 .
  • the part of the human body to be detected and the predetermined object may be set by the user, or may be preset in the smartphone 100. Data may be downloaded from a server and set in smartphone 100 .
  • processing for detecting a part of the human body and a predetermined object is performed by operating the imaging device 104B with low power consumption.
  • In addition to the normal mode (an example of the first mode) for performing normal imaging of an object, the imaging element 104B is operable in a mode in which the power consumption of the imaging element 104B is lower than in the normal mode and the outer shape of an object can still be detected, more specifically, in a detection mode (an example of the second mode) for detecting parts of the human body and predetermined objects.
  • the imaging element 104B has a mode in which imaging is performed with lower power consumption than in the detection mode.
  • This mode is a mode that can detect the presence or absence of a moving object (the shape of the moving object is not captured), and this mode is hereinafter referred to as a moving object detection mode.
  • the imaging device 104B has an imaging mode that consumes more power than the detection mode and can obtain image data that allows recognition of the pattern of the object. This mode is hereinafter referred to as recognition mode.
  • recognition mode for example, object identification is performed by multi-wavelength sensing.
  • For example, by adjusting imaging parameters, the power consumption in the detection mode is made smaller than the power consumption in the normal mode.
  • the imaging parameters include parameters related to the amount of image data obtained by imaging and parameters related to driving during imaging. Specific examples of the former include resolution, gradation, color, imaging region (ROI (Region of Interest)), and wavelength resolution of image data.
  • By making the imaging parameters in the detection mode smaller than those in the normal mode, the power consumption in imaging in the detection mode can be made smaller than the power consumption in imaging in the normal mode.
  • For example, by setting the driving clock and frame rate during imaging in the detection mode lower than those during imaging in the normal mode and by reducing the number of functional blocks in the operating state as much as possible, the power consumption during imaging in the detection mode can be made smaller than the power consumption in imaging in the normal mode.
  • the power consumption in imaging in the moving object detection mode can be made smaller than the power consumption in imaging in the detection mode.
  • the recognition mode is set with larger imaging parameters than the moving object detection mode and the detection mode. Note that the imaging parameters in the normal mode and the imaging parameters in the recognition mode may be the same. However, since the imaging area is optimized in the recognition mode, the smartphone 100 operates with lower power consumption than in the normal mode.
  • Control of each functional block for each mode is, for example, as follows (a minimal sketch of this gating appears after the list).
    ⁃ Blocks that are supplied with power and operate as usual regardless of the mode: system controller 131, BIAS 136
    ⁃ Blocks that do not operate because power is not supplied to them in the normal mode: power control section 132, detection section 139, ring oscillator 134
    ⁃ Blocks that operate only in the normal mode: PLL 135, camera signal processing unit 140, output I/F 141
    ⁃ Blocks in which power consumption is optimized by adjusting imaging parameters in the detection mode, moving object detection mode, and recognition mode: pixel array section 121, clock generation circuit 133, timing generator 137, sensor I/F 138
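  • The following is a minimal Python sketch of the gating just listed. The mode and block names mirror the functional blocks above, but the dictionary layout and all numeric imaging parameters (resolutions, frame rates, clocks) are hypothetical illustrations, not values from this disclosure.

      from dataclasses import dataclass

      @dataclass
      class ImagingParams:
          resolution: tuple        # (width, height) of driven pixels
          frame_rate_hz: float
          drive_clock_mhz: float

      # system controller 131 and BIAS 136 are always powered, so they are
      # omitted here; only the mode-dependent gating is modeled.
      MODE_CONFIG = {
          # presence of a moving object only; in theory one pixel can suffice
          "moving_object_detection": {
              "powered_blocks": {"power_control_132", "ring_oscillator_134", "detection_139"},
              "params": ImagingParams((32, 24), 1.0, 1.0),
          },
          # outer shapes: enough to detect body parts and predetermined objects
          "detection": {
              "powered_blocks": {"power_control_132", "ring_oscillator_134", "detection_139"},
              "params": ImagingParams((320, 240), 5.0, 10.0),
          },
          # pattern recognition over the ROI found in detection mode
          "recognition": {
              "powered_blocks": {"power_control_132", "ring_oscillator_134", "detection_139"},
              "params": ImagingParams((1280, 960), 15.0, 50.0),
          },
          # full-quality imaging through the whole pipeline
          "normal": {
              "powered_blocks": {"pll_135", "camera_signal_processing_140", "output_if_141"},
              "params": ImagingParams((4032, 3024), 30.0, 200.0),
          },
      }

      def apply_mode(mode: str) -> None:
          cfg = MODE_CONFIG[mode]
          print(f"{mode}: power on {sorted(cfg['powered_blocks'])}, {cfg['params']}")

      apply_mode("moving_object_detection")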
  • While the imaging element 104B performs imaging in these low power consumption modes, the application processor 101A of the control unit 101 is in a non-operating state or a low power consumption operating state (hereinafter collectively referred to as hibernation as appropriate). Thereby, not only the power consumption of the image sensor 104B but also the power consumption of the control unit 101 can be reduced.
  • In step ST11, control is performed to set the operation mode of the smartphone 100 to the moving object detection mode.
  • imaging in the moving object detection mode is started when the smartphone 100 is in a sleep state or locked state, or when the position information is within a predetermined range.
  • the control unit 101 notifies the system controller 131 of the imaging device 104B of that fact.
  • the system controller 131 that has received the notification from the control unit 101 activates the functional blocks necessary for imaging in the moving object detection mode, and deactivates the functional blocks that are unnecessary for imaging in the moving object detection mode.
  • the system controller 131 controls the power control section 132, the ring oscillator 134, and the detection section 139 to operate.
  • the system controller 131 also controls the PLL 135, the camera signal processing unit 140, and the output I/F 141 to be in a rest state.
  • The system controller 131 sets imaging parameters (for example, resolution, gradation, driving clock, frame rate, etc.) corresponding to the moving object detection mode for each unit so that the driving circuit of the pixel array unit 121, the clock generation circuit 133, the timing generator 137, the sensor I/F 138, and the like operate with those parameters.
  • the parameter corresponding to the moving body detection mode is, for example, a parameter set in advance to the extent that the presence or absence of a moving body can be detected.
  • the presence or absence of a moving object can be detected from changes in pixel output, so the number of drive pixels may theoretically be one.
  • Power consumption in the functional blocks operating with parameters corresponding to the moving object detection mode may be smaller than the power consumption during operation in the detection mode or the normal mode. Control for reducing the power supply is performed by the power control section 132. Then, the process proceeds to step ST12.
  • In step ST12, it is determined whether or not a moving object has been detected as a result of imaging in the moving object detection mode.
  • the detection unit 139 determines whether or not a moving object is detected based on image data output from the sensor I/F 138 .
  • For example, the detection unit 139 determines that there is a moving object when the pixel values output at each predetermined timing change by a certain amount or more, and determines that there is no moving object when they do not (as sketched below).
  • the detector 139 notifies the system controller 131 of the determination result. If there is no moving object (No), the process returns to step ST11, and imaging in the moving object detection mode is repeated. If there is a moving object (Yes), the process proceeds to step ST13.
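  • A minimal sketch of the decision in step ST12, assuming frames arrive as flat lists of pixel values; the threshold values are illustrative, not taken from this disclosure.

      def has_moving_object(prev_frame, curr_frame, pixel_threshold=16, count_threshold=1):
          """True when enough pixels changed by at least pixel_threshold."""
          changed = sum(
              1 for p, c in zip(prev_frame, curr_frame) if abs(c - p) >= pixel_threshold
          )
          # As noted above, in theory a single driven pixel can be enough.
          return changed >= count_threshold

      prev = [10, 10, 10, 10]
      curr = [10, 60, 10, 10]                # one pixel changed sharply
      print(has_moving_object(prev, curr))   # True -> proceed to step ST13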
  • In step ST13, imaging is performed in the detection mode.
  • The system controller 131, notified by the detection unit 139 of the determination result that there is a moving object, sets the imaging parameters corresponding to the detection mode for each unit so that the pixel array unit 121, the clock generation circuit 133, the timing generator 137, and the sensor I/F 138 operate with those parameters.
  • The parameter corresponding to the detection mode is, for example, a parameter set in advance to the extent that a part of the human body and a predetermined object can be detected. Since the recognition accuracy in the detection mode needs to be higher than in the moving object detection mode, the power supplied to the functional blocks needs to be increased compared to the moving object detection mode. Control for increasing the power supply is performed by the power control unit 132. Then, the process proceeds to step ST14.
  • In step ST14, it is determined whether or not a part of the human body and a predetermined object have been detected as a result of imaging in the detection mode. For example, based on the image data output from the sensor I/F 138, the detection unit 139 determines whether a part of the human body and a predetermined object have been detected.
  • a known process can be applied as the detection process. For example, as processing for detecting a part of the human body, processing for detecting skin color, processing for detecting feature points such as joints, and the like can be applied.
  • As the process for detecting a predetermined object, a process using a learning model obtained by machine learning, a process such as contour detection, and the like can be applied. A simple illustration of the skin-color heuristic follows.
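  • As one concrete illustration of the skin-color detection mentioned above, the following sketch classifies pixels with a fixed HSV range. The range is a common heuristic and an assumption of this sketch, not a value from this disclosure; a learned model such as a DNN could replace it.

      import colorsys

      def is_skin_pixel(r: int, g: int, b: int) -> bool:
          h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
          # roughly reddish hue, moderate saturation, not too dark
          return h <= 50 / 360 and 0.15 <= s <= 0.70 and v >= 0.35

      def skin_ratio(pixels) -> float:
          hits = sum(1 for (r, g, b) in pixels if is_skin_pixel(r, g, b))
          return hits / max(len(pixels), 1)

      # A region is treated as a hand candidate when enough pixels look like skin.
      region = [(205, 160, 130)] * 70 + [(30, 30, 30)] * 30
      print(skin_ratio(region) >= 0.5)   # True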
  • the detector 139 notifies the system controller 131 of the determination result.
  • the system controller 131 transmits an interrupt to the control unit 101 .
  • the control unit 101 that has received the interrupt changes the operation state of the application processor 101A from the sleep state to the operation state. If the part of the human body and the predetermined object are not detected (No), the process returns to step ST13, and the imaging in the detection mode is repeated. If the part of the human body and the predetermined object are detected (Yes), the process proceeds to step ST15.
  • In step ST15, imaging is performed in the recognition mode.
  • the detection unit 139 supplies image data obtained by imaging in the recognition mode to the system controller 131 .
  • the system controller 131 recognizes a part of the human body or a predetermined object, obtains detailed information about the part of the human body or the predetermined object, and recognizes an application to be activated based on this information.
  • a recognition result is supplied to the application processor 101A. Note that, in imaging in the recognition mode, for example, only a partial area of the human body or a predetermined object area detected in the detection mode is imaged with high image quality (for example, at high resolution). As a result, it is possible to suppress power consumption in imaging in the recognition mode. Then, the process proceeds to step ST16.
  • In step ST16, the application processor 101A, which has entered the operating state, launches an application corresponding to the recognition result supplied from the system controller 131. Then, the process proceeds to step ST17.
  • In step ST17, the application started by the application processor 101A is executed.
  • It is preferable that the execution result of the application be notified so that the user can recognize it.
  • For example, it is preferable that the execution result of the application be displayed on the display 12 or notified by voice through the speaker 107. The overall flow of steps ST11 to ST17 is sketched below.
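  • The climb through the modes in steps ST11 to ST17 can be summarized as the following loop. The callables are placeholders for the on-sensor processing and application launch described above, so this is a structural sketch rather than an implementation.

      def sensing_loop(capture, detect_motion, detect_body_and_object,
                       recognize, wake_application_processor):
          while True:
              # ST11/ST12: stay in the cheapest mode until something moves
              while not detect_motion(capture("moving_object_detection")):
                  pass
              # ST13/ST14: detection mode until a body part and object appear
              while not detect_body_and_object(capture("detection")):
                  pass
              # ST15: recognition mode images only the detected ROI at high quality
              result = recognize(capture("recognition"))
              # an interrupt wakes the application processor; ST16/ST17: launch and run
              app = wake_application_processor(result)
              app.run()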
  • Imaging in the moving object detection mode, the detection mode, and the recognition mode may use the front camera 13A in some cases and the rear camera 13B in other cases.
  • a first example is an example in which the part of the human body is the user's hand, the predetermined object is an object held or gripped by the user's hand, and the application is a camera.
  • the application processor 101A recognizes a part of the human body as an instruction to operate an object (for example, an instruction to capture an image), and executes control for capturing an image with the camera according to the recognition result.
  • As shown in FIG. 5A, image data including the user's hands 21A and 21B and the document 22 is obtained as a result of imaging in the detection mode. Further, as a result of imaging in the recognition mode, it is detected that the hands 21A and 21B are holding the document 22.
  • the detection unit 139 notifies the system controller 131 that a part of the human body and a predetermined object have been detected.
  • the detection unit 139 also outputs image data obtained in the recognition mode to the system controller 131 .
  • The system controller 131 recognizes the characters and regions of the document and the contents of operations corresponding to gestures by performing recognition processing on the image data. In this example, since both hands (hands 21A and 21B) are detected and an object is present between them, the system controller 131 determines that the camera is the application to be started and recognizes an operation instruction to image the object positioned between both hands.
  • the system controller 131 notifies the application processor 101A of the recognition result.
  • the application processor 101A that has received the notification controls the imaging unit 104 to perform imaging.
  • The imaging unit 104 operates according to this control and performs imaging. Specifically, the area where the document 22 exists is imaged by the imaging device 104B. Note that since the document 22 is considered to exist in the ROI, it is preferable that the image of the document 22 be captured with high image quality. Therefore, the document 22 is imaged, for example, in the normal mode.
  • The image data of the document 22 obtained by imaging in the normal mode is subjected to image processing for improving image quality by the camera signal processing unit 140, and the processed image data (see FIG. 5C) is output to the control unit 101 via the output I/F 141. The image data of the document 22 is displayed on the display 12 or stored in memory. The dispatch rule of this example is sketched below.
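  • A sketch of the rule used in this first example: two hands plus an object between them map to the camera application, with the object region as the capture ROI. The RecognitionResult fields and the returned dictionary are hypothetical names for illustration.

      from dataclasses import dataclass
      from typing import Optional, Tuple

      @dataclass
      class RecognitionResult:
          hands: list                                       # detected hands
          object_box: Optional[Tuple[int, int, int, int]]   # (x, y, w, h) between hands

      def choose_application(result: RecognitionResult):
          if len(result.hands) == 2 and result.object_box is not None:
              # interpreted as "image the object held between both hands" (FIG. 5)
              return {"app": "camera", "roi": result.object_box, "mode": "normal"}
          return None   # no matching gesture; keep sensing

      print(choose_application(RecognitionResult(["left", "right"], (40, 30, 200, 150))))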
  • a second example is an example in which the part of the human body is the user's finger and the predetermined object is an object positioned at the fingertip.
  • the object may be grasped with fingertips.
  • the application launched in this case is an application that registers information related to the object located at the fingertip.
  • the information related to the object positioned at the fingertip is, for example, information related to the usage history of the object.
  • the detection unit 139 outputs to the system controller 131 that the part of the human body and the predetermined object have been detected.
  • image data obtained in the recognition mode is supplied to the system controller 131 .
  • the system controller 131 recognizes the positional information of the fingertips of the thumb 23A and the index finger 23B, the fact that the object at the fingertips is the drug tablet 24, the type of the drug, and the like. Furthermore, it recognizes the content of the operation corresponding to the gesture.
  • the system controller 131 recognizes that the application to be started is an application for registering the use history of the medicine tablet 24 .
  • the system controller 131 notifies the recognition result to the application processor 101A.
  • Upon receiving the notification, the application processor 101A activates an application for registering the medicine usage history. Then, the medicine taking history (for example, the type of medicine taken and the date and time) is registered in the application.
  • FIG. 6B shows an example of a drug registration screen in the application. A screen shown in FIG. 6B is displayed on the display 12 . According to this example, it is possible to easily register the taking history of medicines that tend to be forgotten.
  • the object located at the fingertip may be the date on the calendar.
  • the information related to the date on the calendar can include the schedule entered in the column for that date.
  • the schedule function of the smartphone 100 is activated as an application, and the schedule, which is information related to the date of the calendar, is registered in the schedule function.
  • The part of the human body may be, for example, a face instead of a finger, and the application may be an application that registers the use history of cosmetics instead of the history of taking medicine.
  • the user's face 25 and the container 26 are detected by imaging in the detection mode, as shown in FIG.
  • the imaging in the recognition mode allows the system controller 131 to recognize that the container 26 is a container for lotion.
  • the system controller 131 recognizes that the application to be activated is an application for registering the use history of cosmetics, for example.
  • the system controller 131 notifies the recognition result to the application processor 101A.
  • Upon receiving the notification, the application processor 101A activates an application for registering the use history of cosmetics. Then, the use history of cosmetics (for example, the type of cosmetics used and the date and time of use) is registered in the application. In this way, the combination of the part of the human body and the predetermined object can be changed as appropriate, and the application activated according to the combination can also be changed as appropriate.
  • A part of the human body and another part of the human body different from that part may be detected by imaging in the detection mode, and an application corresponding to the combination of the detected parts may be executed.
  • the human body part is the fingers of one hand
  • the human body part different from the human body part is the fingers of the other hand.
  • An example of an application that is activated according to the combination of the fingers of each hand is an alarm.
  • image data including fingers 31 of one hand and fingers 32 of the other hand are obtained as a result of imaging in the detection mode.
  • the detection unit 139 notifies the system controller 131 that the fingers of each hand have been detected.
  • image data obtained by imaging in the recognition mode is supplied to the system controller 131 .
  • the system controller 131 recognizes the positions of the fingers 31 and 32 and the number of fingers by performing recognition processing on the image data. In this example, since the fingers 31 and 32 are present in the image data so as to overlap each other, the system controller 131 recognizes that the application to be activated is, for example, an alarm.
  • the system controller 131 notifies the recognition result to the application processor 101A.
  • Upon receiving the notification, the application processor 101A activates an application that sets an alarm. Then, the application processor 101A sets an alarm at the time (7 o'clock in this example) corresponding to the total number of fingers 31 and 32 (7 in this example).
  • FIG. 8B shows an example of an alarm setting screen in the application. The screen shown in FIG. 8B (a screen in which the alarm is automatically set at 7:00) is displayed on the display 12. According to this example, an alarm can be easily set; the mapping is sketched below.
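  • The mapping in this alarm example reduces to counting fingers, as in the following sketch; the range check is an added assumption of the sketch.

      def alarm_time_from_fingers(fingers_one_hand: int, fingers_other_hand: int) -> str:
          total = fingers_one_hand + fingers_other_hand
          if not 1 <= total <= 10:
              raise ValueError("expected 1 to 10 extended fingers in total")
          return f"{total:02d}:00"

      print(alarm_time_from_fingers(5, 2))   # "07:00", as in FIG. 8B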
  • the application in this example is not limited to alarms.
  • the application in this example may be the settings of the smart phone 100 .
  • The smartphone 100 can be set to a mode corresponding to private use of the smartphone 100 (hereinafter referred to as the private mode as appropriate) or a mode corresponding to work use of the smartphone 100 (hereinafter referred to as the office mode as appropriate).
  • the application corresponding to the finger combination may be the control for setting either private mode or office mode.
  • the private mode and the office mode differ, for example, in the destination communication carrier (line operator), the type of application for which various notifications are made, and the arrangement of icons on the display 12 .
  • image data including fingers 31 of one hand and fingers 32 of the other hand is obtained as a result of imaging in the detection mode.
  • the detection unit 139 notifies the system controller 131 that the fingers 31 and 32 have been detected.
  • image data obtained by imaging in the recognition mode is supplied to the system controller 131 .
  • the system controller 131 recognizes the positions and number of the fingers 31 and 32 by performing recognition processing on the image data, and further recognizes that the application to be activated is mode setting control.
  • the system controller 131 notifies the recognition result to the application processor 101A.
  • the application processor 101A that received the notification launches the application for setting the mode.
  • The application processor 101A then makes settings corresponding to the private mode or the office mode according to the recognized combination of the fingers (for example, according to whether the total number of fingers is six or seven).
  • the number of fingers corresponding to each mode is not limited to six or seven.
  • Since imaging in the detection mode can be performed for a long time, various actions (combinations of a part of the human body and a predetermined object) can be detected, and an application corresponding to the detection result can be activated. That is, it is possible to easily start not only applications that are likely to be executed in a specific time period, but also applications that can be executed at various timings and locations (for example, applications for imaging, payment, scheduling, etc.).
  • the imaging element 104B can perform imaging in the normal mode and in a mode with lower power consumption than the normal mode, as in the first embodiment.
  • The imaging mode with low power consumption means a mode in which power is supplied to appropriate functional blocks of the signal processing circuit 104C at least within a range in which it is possible to detect whether or not the electronic device is being viewed by the user, thereby suppressing power consumption.
  • the signal processing circuit 104C to which power corresponding to this mode is supplied performs processing for detecting whether or not the user is in the viewing state.
  • the imaging mode in which the user can detect whether or not the electronic device is in the visible state and consumes less power than the normal mode is hereinafter referred to as the visible state detection mode as appropriate.
  • the visible state means a state in which the user is viewing an electronic device such as a smartphone.
  • the user in the visible state is appropriately referred to as the visible state person.
  • A specific example of the user is a person, but the user may also be a robot or an animal such as a pet.
  • the application processor 101A executes a predetermined application when the result of detection by the imaging element 104B is the visible state. Specific contents of the modified example will be described below.
  • FIG. 10A shows a state in which user U of smartphone 100 is viewing a map displayed on display 12 .
  • the smartphone 100 side uses the front camera 13A to determine whether or not the user U is looking at the smartphone 100, that is, whether or not it is in a visible state.
  • When it is detected that the user U is in the visible state, the application processor 101A performs obstacle detection processing as an example of a predetermined application. Specifically, as shown in FIG. 10B, an image is captured using the rear camera 13B. For example, when the rear camera 13B is controlled to be off, the rear camera 13B is activated. Then, after the rear camera 13B is set to perform imaging in the normal mode, imaging by the rear camera 13B is started. Since the rear camera 13B captures images in the normal mode, the image quality of the obtained image is higher than that of an image obtained by imaging in the visual recognition state detection mode. Obstacle detection processing for detecting obstacles is performed using this image.
  • Obstacles include, for example, people, mobile objects such as cars and bicycles, objects in front of the user U, and the like. Since the obstacle detection process is performed using the high-quality image, it is possible to perform the obstacle detection process with high accuracy. When an obstacle in front of the user U is detected as a result of the obstacle detection process, the user U is notified that there is an obstacle ahead by sounding an alarm or vibrating the smartphone 100 .
  • the user U stops looking at the smartphone 100 as shown in FIG. 10C. In this case, the non-visible state is detected. Since it is not in the visible state, the imaging by the rear camera 13B ends. Note that, for example, while the smartphone 100 is controlled to be powered on, the front camera 13A continues to capture images in the visual recognition state detection mode.
  • FIG. 11 is a flowchart showing the flow of processing performed in this modified example.
  • In step ST31, an image is captured in the visual recognition state detection mode using the front camera 13A.
  • In the viewing state detection mode, power is supplied to the pixel array unit 121, the system controller 131, the sensor I/F 138, and the detection unit 139, for example. Then, the process proceeds to step ST32.
  • In step ST32, viewing state detection processing is performed to determine whether or not the user U of the smartphone 100 is in the viewing state based on the image captured in the viewing state detection mode. For example, if the image includes eyes, it is determined to be the visible state, and if the image does not include eyes, it is determined to be a state that is not the visible state (hereinafter referred to as the non-visible state as appropriate). By detecting not only the presence or absence of the eye region but also the direction of the line of sight, it is possible to more accurately determine whether the state is the visible state or the non-visible state.
  • the line of sight of the eye included in the image is directed toward the front camera 13A, it is determined to be in the visible state, and when the line of sight of the eye included in the image is not directed to the front camera 13A, it is determined to be in the non-visible state.
  • whether or not the object is in the visible state may be determined by applying a learning model learned in advance by machine learning.
  • determination of the visible state or the non-visible state may be performed by applying a learning model obtained by learning the eye shape in the visible state.
  • the viewing state detection process is performed by the detection unit 139 in this modified example, but may be performed by another functional block such as the system controller 131 . Then, the process proceeds to step ST33.
  • In step ST33, it is determined whether or not the degree of certainty of the visible state is equal to or greater than a threshold. Such determination is made by the detection unit 139, for example. For example, if the visible state lasts for several seconds or if the eye region reflected in the image is large, the certainty becomes high, and if the visible state lasts only for a short time or the eye region reflected in the image is small, the certainty becomes small. The detection unit 139 then notifies the system controller 131 of the determination result. Upon receiving the determination result, the system controller 131 notifies the application processor 101A that the user U is in the visible state when the certainty of the visible state is equal to or greater than the threshold.
  • Otherwise, the system controller 131 notifies the application processor 101A that the user U is in the non-visible state. A sketch of this decision follows.
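  • A sketch of the decision in steps ST32 and ST33: eyes plus gaze direction yield the visible state, and a certainty built from viewing duration and eye-region size gates the notification. The weights and thresholds below are illustrative assumptions, not values from this disclosure.

      def visibility_certainty(eyes_found: bool, gaze_toward_camera: bool,
                               duration_s: float, eye_area_ratio: float) -> float:
          if not (eyes_found and gaze_toward_camera):
              return 0.0   # ST32: non-visible state
          # longer viewing and a larger eye region both raise the certainty
          return min(1.0, 0.5 * min(duration_s / 3.0, 1.0)
                          + 0.5 * min(eye_area_ratio / 0.05, 1.0))

      CERTAINTY_THRESHOLD = 0.6   # ST33

      def result_for_application_processor(certainty: float) -> str:
          return "visible" if certainty >= CERTAINTY_THRESHOLD else "non-visible"

      print(result_for_application_processor(visibility_certainty(True, True, 4.0, 0.06)))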
  • the processing up to this point is performed by the image sensor 104B. If the user U is in the visible state (Yes), the process proceeds to step ST35.
  • The processing from step ST35 onward is the obstacle detection processing (an application for obstacle detection) performed by the application processor 101A.
  • In step ST35, the application processor 101A turns on the display 12 because the user U is in the viewing state. Then, the process proceeds to step ST36.
  • In step ST36, the application processor 101A performs control to activate the acceleration sensor that constitutes the sensor 111 in this modification. Then, the process proceeds to step ST37.
  • In step ST37, the application processor 101A detects whether or not the user U is in a walking state based on the sensing result of the acceleration sensor. If the user U is walking, the process proceeds to step ST38.
  • In step ST38, after the rear camera 13B is activated, imaging in the normal mode by the rear camera 13B is started.
  • An image captured by the rear camera 13B in normal mode is supplied to the camera signal processing section 140 .
  • the camera signal processing unit 140 performs processing for detecting whether or not an obstacle is included in the image.
  • the camera signal processing unit 140 performs processing for detecting the presence or absence of an obstacle on the road included in the image. Through such processing, monitoring regarding the presence or absence of an obstacle using the rear camera 13B is started.
  • When an obstacle is detected by this monitoring, the application processor 101A is notified to that effect.
  • the application processor 101A that has received the notification notifies the user U of the presence of the obstacle using sound, display, vibration, a combination thereof, or the like.
  • If the determination result of step ST33 is the non-visible state (No), the process proceeds to step ST39.
  • In step ST39, the application processor 101A turns off the display 12 because the user U is in the non-visible state. If the display 12 is already in the off state, the off state is continued. Then, the process proceeds to step ST40.
  • In step ST40, since the user U is in the non-visible state and there is no need to perform obstacle detection processing, the application processor 101A performs control to stop the operation of the acceleration sensor. If the operation of the acceleration sensor is already stopped, the application processor 101A continues the stopped state. Then, the process proceeds to step ST41.
  • In step ST41, since there is no need to perform obstacle detection processing, imaging by the rear camera 13B ends, and monitoring regarding the presence or absence of obstacles ends. If the determination result in step ST37 is No, that is, if the user U is not walking, there is no need to perform the obstacle detection processing, so the process proceeds to step ST41 and imaging by the rear camera 13B ends.
  • the display control for the display 12 and the control for the acceleration sensor may be switched in order or may be performed in parallel.
  • Note that the process of detecting an obstacle based on an image obtained by imaging in the normal mode may be performed by the application processor 101A instead of the camera signal processing section 140. A condensed sketch of the application-processor-side control follows.
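  • A condensed sketch of the application processor side of steps ST35 to ST41. The display, accelerometer, rear camera, and notifier objects stand in for actual platform interfaces, which this disclosure does not specify.

      def on_visibility_result(visible: bool, display, accelerometer, rear_camera, notifier):
          if not visible:
              display.off()                        # ST39
              accelerometer.stop()                 # ST40
              rear_camera.stop()                   # ST41: obstacle monitoring ends
              return
          display.on()                             # ST35
          accelerometer.start()                    # ST36
          if not accelerometer.is_walking():       # ST37: No -> no monitoring needed
              rear_camera.stop()
              return
          rear_camera.start(mode="normal")         # ST38: high-quality imaging
          if rear_camera.obstacle_ahead():
              notifier.alert("obstacle ahead")     # sound, display, and/or vibration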
  • The content displayed on the display 12 is not limited to the map, and may be, for example, an image of the other party in a call. This modification can be applied even when the user U walks while operating the smartphone 100 for an urgent contact.
  • the visual recognition state detection process is performed based on the imaging result with low power consumption. Therefore, for example, power consumption in the smartphone 100 can be suppressed even if the visual recognition state detection process is performed all the time or for a long time after the power of the smartphone 100 is turned on. Further, since the visual recognition state detection processing is performed within the image sensor 104B, the amount of data transferred between the image sensor 104B and the application processor 101A can be reduced, and power consumption can be reduced. In addition, since the application processor 101A executes a predetermined application when it is detected that the user U is in the visible state, unnecessary execution of the application can be prevented.
  • FIG. 12 is a diagram showing an appearance example of an imaging device (imaging device 100A) according to this modification.
  • the imaging device 100A has, for example, a substantially rectangular parallelepiped shape.
  • the imaging device 100A also has a display 12A on one main surface.
  • the imaging device 100A has a first camera 13C provided on the same surface as the display 12A, and a second camera 13D provided on a side perpendicular to the display 12A.
  • the first camera 13C and the second camera 13D have an imaging device 104B.
  • the imaging device 100A has a configuration that can be attached to a moving object such as a car, a bicycle, or a motorcycle using an appropriate attachment member.
  • the imaging device 100A has, for example, functions similar to those of the smartphone 100, but may have functional differences.
  • FIGS. 13A to 13C are diagrams for schematically explaining the contents of this modification.
  • the examples shown in FIGS. 13A to 13C are examples in which the imaging device 100A is attached to a bicycle. For example, when the power is turned on, the imaging device 100A starts imaging in the visual recognition state detection mode using the first camera 13C.
  • the user U looks at the display 12A while the bicycle is stopped and confirms the route of the bicycle. It is detected based on the image obtained by the first camera 13C that the user U is in the visible state.
  • the user U starts the bicycle while confirming the route (while looking at the display 12A). Since the user U is in the visible state and the user U has moved, after the second camera 13D is activated, the second camera 13D takes an image in the normal mode.
  • the above-described obstacle detection processing is performed based on the image obtained by the second camera 13D. When an obstacle is detected in front of the user U, the user U is notified of this fact. During this time as well, imaging in the visual recognition state detection mode using the first camera 13C is continued.
  • When the user U is no longer in the visible state, the imaging device 100A determines that the obstacle detection process does not need to be performed, because the user U is considered to be facing the front without looking at the display 12A. Based on such determination, the imaging by the second camera 13D ends.
  • Even during this time, imaging in the visual recognition state detection mode by the first camera 13C and the visual recognition state detection process using the imaging result continue to be performed.
  • FIG. 14 is a flowchart showing the flow of processing according to this modified example. Since the contents of steps ST41 to ST43 are the same as the processes of steps ST31 to ST33 in Modification 1, except that the front camera 13A is the first camera 13C, redundant description will be omitted.
  • The processing from step ST44 onward is the obstacle detection process (an application for obstacle detection) performed by the application processor 101A.
  • In step ST44, the application processor 101A turns on the display 12A because the user U is in the viewing state. Then, the process proceeds to step ST45.
  • In step ST45, the application processor 101A determines whether or not the bicycle on which the user U is riding is running. For example, the application processor 101A makes this determination based on the sensing results of the always-on GPS (an example of the position sensor 110) and a speed sensor used for recording the running log. If the bicycle on which the user U is riding is running, the process proceeds to step ST46.
  • In step ST46, after the second camera 13D is activated, the second camera 13D starts imaging in the normal mode.
  • An image captured by the second camera 13D in the normal mode is supplied to the camera signal processing section 140.
  • the camera signal processing unit 140 performs processing for detecting whether or not an obstacle is included in the image.
  • the camera signal processing unit 140 performs processing for detecting the presence or absence of an obstacle on the road included in the image. Obstacle monitoring using the second camera 13D is started by such processing.
  • When an obstacle is detected by this monitoring, the application processor 101A is notified to that effect.
  • the application processor 101A that has received the notification notifies the user U of the presence of the obstacle using sound, display, vibration, a combination thereof, or the like.
  • If the determination result of step ST43 is the non-visible state (No), the process proceeds to step ST47.
  • In step ST47, the application processor 101A turns off the display 12 because the user U is in the non-visible state. If the display 12 is already off, the off state is continued. Then, the process proceeds to step ST48.
  • In step ST48, since the user U is no longer in the visible state, the application processor 101A determines that the obstacle detection processing does not need to be performed. Based on this determination, the application processor 101A performs control to stop imaging by the second camera 13D. Also, if the determination result in step ST45 is No, that is, if the bicycle is not moving, there is no need to perform the obstacle detection processing, so the process proceeds to step ST48 and imaging by the second camera 13D ends.
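  • As a rough illustration of the FIG. 14 flow, the sketch below models one pass through steps ST41 to ST48 as a pure function. All names (fig14_step and its boolean inputs) are hypothetical stand-ins for this illustration, not the patent's actual interfaces.

```python
# Hedged sketch of one pass through the FIG. 14 flow (ST41-ST48).
# Every identifier here is an illustrative assumption.

def fig14_step(viewing: bool, moving: bool, obstacle: bool):
    """Returns (display_on, second_camera_on, notify_user)."""
    if not viewing:                    # ST43 -> No
        return (False, False, False)   # ST47: display off; ST48: stop camera 13D
    if not moving:                     # ST45 -> No
        return (True, False, False)    # ST48: obstacle detection not needed
    # ST46: second camera 13D images in normal mode; obstacle monitoring runs
    return (True, True, obstacle)      # notify the user when an obstacle is found

if __name__ == "__main__":
    # User looking, bicycle moving, obstacle ahead -> display on, camera on, notify
    print(fig14_step(viewing=True, moving=True, obstacle=True))
```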
  • This modification can also provide the same effects as those of Modification 1 described above.
  • In the above description, the first camera 13C captures images in the visual recognition state detection mode and the second camera 13D captures images for the obstacle detection processing, but the roles of the two cameras may be reversed.
  • FIG. 15 is a diagram showing an appearance example of a portable device according to this modification.
  • The portable device according to this modification is a watch-type portable device 100B.
  • The portable device 100B has a display 12C and a camera 13F provided above the display 12C.
  • The portable device 100B has, for example, the same functions and configuration as the smartphone 100.
  • FIG. 16 is a flowchart showing the flow of processing performed by the portable device 100B according to this modification.
  • The processing surrounded by the upper dotted line in FIG. 16 is performed by the imaging element 104B, and the processing surrounded by the lower dotted line is performed by the application processor 101A.
  • In step ST51, the portable device 100B is controlled to be in a locked state in which no operation is accepted.
  • The transition to the locked state is performed by, for example, the control unit 101 of the portable device 100B. Then, the process proceeds to step ST52.
  • Steps ST52 to ST54 are the same as steps ST31 to ST33 in Modification 1, except that the camera 13F takes the place of the front camera 13A, so redundant description is omitted.
  • If the determination in step ST54 is No, the process returns to step ST52.
  • If the determination in step ST54 is Yes, the process proceeds to step ST55. The processing from step ST55 onward is, for example, an application that performs face authentication using the camera 13F (face authentication processing), which is executed by the application processor 101A.
  • In step ST55, the imaging mode of the camera 13F is switched from the visual recognition state detection mode to the normal mode under the control of the application processor 101A.
  • Alternatively, the system controller 131 of the imaging element 104B may autonomously switch the imaging mode from the visual recognition state detection mode to the normal mode.
  • In addition, the application processor 101A turns on the display 12C. Then, the process proceeds to step ST56.
  • In step ST56, a high-quality image is obtained by normal-mode imaging with the camera 13F.
  • The camera signal processing unit 140 then performs the face authentication processing.
  • A known process can be applied as the face authentication processing.
  • For example, a learning model obtained by machine learning is used to detect whether or not a face included in the image is a registered face. Since a high-quality image is obtained by imaging in the normal mode of the camera 13F, face authentication can be performed accurately. If the face authentication succeeds in step ST56 (Yes), the process proceeds to step ST57.
  • In step ST57, the portable device 100B is unlocked because the authentication has succeeded. Then, the process proceeds to step ST58 and ends. If the face authentication does not succeed in step ST56 (No), the process returns to step ST52, and the processing from step ST52 onward is repeated.
  • If the face authentication processing were performed whenever movement of a subject is detected based on the image obtained by the camera 13F, the face authentication processing would run even when the movement of a person other than the user U is detected, and power would be wasted. Likewise, if a person is detected based on the image obtained by the camera 13F and that detection is used as the trigger for face authentication processing, the face authentication processing would run even when the user U is not looking at the portable device 100B, again wasting power. In this modification, the face authentication processing is triggered by the user U looking at the portable device 100B, that is, by the visible state, so unnecessary face authentication processing can be suppressed. As a result, the power consumption of the portable device 100B can be suppressed. In addition, since the application for performing face authentication is started and executed without the user U performing any operation, convenience for the user U can be improved.
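  • A minimal sketch of the FIG. 16 lock/unlock loop (ST51 to ST58) might look like the following; the frame dictionaries and the viewing/face fields are assumptions made for illustration only.

```python
# Hedged sketch of the face-authentication unlock loop in FIG. 16.
# Frames are simulated as dicts; a real device would read the camera 13F.

def unlock_loop(frames):
    for frame in frames:                 # ST52: low-power viewing-state imaging
        if not frame["viewing"]:         # ST54 -> No: keep waiting in low power
            continue
        # ST55: switch to normal mode and turn the display on (implied here)
        if frame["face_registered"]:     # ST56: face authentication succeeds
            return "unlocked"            # ST57
        # ST56 -> No: return to ST52 on the next frame
    return "locked"

print(unlock_loop([{"viewing": False, "face_registered": False},
                   {"viewing": True, "face_registered": True}]))  # -> unlocked
```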
  • A watch-type portable device has been described as an example of the portable device 100B, but this modification can also be applied to electronic devices such as smartphones.
  • The electronic device according to this modification is a television device 100C.
  • The television device 100C has, for example, a display 12D, and further has a camera 13G provided at a predetermined position on the frame surrounding the display 12D (for example, near the upper center).
  • The television device 100C has an internal configuration similar to that of a known television device.
  • The camera 13G has the imaging element 104B and is capable of executing the viewing state detection processing. For example, when control is performed to turn on the main power supply of the television device 100C (for example, when the television device 100C is connected to a commercial power supply), the viewing state detection processing using the camera 13G is started.
  • The viewing state of the user U is detected by the same viewing state detection processing as in Modification 1.
  • When the viewing state is detected, the television device 100C is turned on.
  • In addition, the imaging mode of the camera 13G transitions from the visual recognition state detection mode to the normal mode.
  • Such transition control may be performed autonomously by the imaging element 104B, or may be performed under the control of the control unit of the television device 100C.
  • The application processor 101A of the television device 100C then performs processing for authenticating the face of the user U based on the image obtained by imaging in the normal mode. Since a high-quality image obtained by imaging in the normal mode is used, the face authentication processing can be performed accurately.
  • The face authentication processing includes, for example, processing for matching the face of the user U against the faces of registered users. If the user U is a registered user, the settings of the television device 100C are controlled to correspond to that registered user. Then, content received via television broadcast or a network is reproduced according to those settings. If the user U is not a registered user, default settings are used to reproduce television broadcasts and content received via networks. The settings of the television device 100C include volume, brightness, luminance, color tone, and the like. Note that after the setting processing of the television device 100C is completed, for example, the imaging mode of the camera 13G is switched back from the normal mode to the visual recognition state detection mode, and imaging in the visual recognition state detection mode is performed.
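  • The per-user setting control described above reduces to a simple lookup, as in the hedged sketch below. The setting names and values are invented for illustration; the text only says that volume, brightness, luminance, color tone, and the like are set per registered user.

```python
# Hedged sketch: apply registered-user settings after face authentication.
from typing import Optional

DEFAULT_SETTINGS = {"volume": 10, "brightness": 50, "color_tone": "standard"}
REGISTERED_SETTINGS = {
    "user_a": {"volume": 6, "brightness": 40, "color_tone": "warm"},
}

def settings_for(user_id: Optional[str]) -> dict:
    # Registered users get their stored settings; everyone else gets defaults.
    return REGISTERED_SETTINGS.get(user_id, DEFAULT_SETTINGS)

print(settings_for("user_a"))  # registered user -> personal settings
print(settings_for(None))      # authentication failed -> default settings
```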
  • When the viewing state is no longer detected, control is performed to turn off the power of the television device 100C.
  • As a comparison, a method is also conceivable in which the presence or absence of a person is detected by a human presence sensor, and the power of the television device 100C is turned on or off according to the detection result.
  • With such a method, however, the power of the television device 100C is turned on whenever a person is present, even if no one is looking at the television device 100C, so electric power may be wasted.
  • In this modification, the power of the television device 100C is not turned on unless the viewing state is detected, thereby preventing the above-described inconvenience from occurring.
  • In this modification, a relatively large display device 100D is installed in a public space or the like.
  • The display device 100D has a display 12H.
  • The display device 100D also has a camera 13H provided substantially in the center of the upper frame.
  • The display device 100D has, for example, functions similar to those of the smartphone 100.
  • The camera 13H captures images in the viewing state detection mode.
  • The viewing state detection processing is performed in the display device 100D based on the images obtained by this imaging.
  • Content reproduction control is then performed according to the detection result.
  • When the viewing state is not detected as a result of the viewing state detection processing, that is, when there is no user looking at the display 12H, the screen of the display 12H is controlled to be turned off.
  • Standby content may be reproduced on the display 12H instead of turning off the screen of the display 12H.
  • When the viewing state is detected as a result of the viewing state detection processing, that is, when there is a user looking at the display 12H, the main content is reproduced on the display 12H.
  • The main content is, for example, preset content.
  • The viewing state detection processing is also performed while the main content is being reproduced on the display 12H.
  • When the non-viewing state is detected during reproduction, the user U who had been viewing the main content has stopped looking at the display 12H, so control is performed to turn off the display 12H.
  • Alternatively, the standby content may be reproduced. Note that the display 12H is also turned off when reproduction of the main content ends; the standby content may be reproduced instead of turning off the display 12H.
  • When the viewing state is detected, the operation mode of the camera 13H may transition from the visual recognition state detection mode to the normal mode.
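  • The playback control just described reduces to a small state function; a hedged sketch follows, with the state names chosen here purely for illustration.

```python
# Hedged sketch of viewing-state-driven playback control on the display 12H.
def playback_state(viewing: bool, use_standby_content: bool) -> str:
    if viewing:
        return "play_main_content"
    # No viewer: either turn the screen off or show standby content.
    return "play_standby_content" if use_standby_content else "screen_off"

print(playback_state(viewing=True, use_standby_content=True))    # main content
print(playback_state(viewing=False, use_standby_content=False))  # screen off
```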
  • The visual recognition state detection processing is performed by the processing from step ST61 to step ST63.
  • The content of the viewing state detection processing is the same as the processing from step ST31 to step ST33 in Modification 1, except that the camera 13H is used instead of the front camera 13A.
  • If the determination in step ST63 is No, the process returns to step ST61.
  • When the determination in step ST63 is Yes, that is, when the visible state is detected, the mode of the camera 13H transitions from the visual recognition state detection mode to the normal mode. Then, the process proceeds to step ST64.
  • In step ST64, the application processor 101A executes an application for recognizing attributes of the user U viewing (browsing) the display 12H. For example, the application processor 101A performs attribute recognition processing for recognizing attributes of the user U based on the image captured by the camera 13H in the normal mode.
  • The method of the attribute recognition processing is not limited to a specific method; for example, a learning model obtained by a DNN may be applied. After the attributes of the user U are recognized, the process proceeds to step ST65.
  • In step ST65, attribute recording processing for recording the attributes of the user U is performed.
  • The attributes of the user U are recorded in an appropriate memory within the display device 100D.
  • The attributes of the user U include attributes of the user U themselves, such as gender and age, as well as the time spent viewing the main content.
  • The attributes may also include attributes that are not limited to a specific user U (for example, the number of times the main content has been viewed).
  • The attributes obtained by the attribute recognition processing may be transmitted to, for example, a server on the network without being recorded locally.
  • Control corresponding to the recognized attributes may be included in the application executed by the application processor 101A. More specifically, content reproduction control according to the recognized attributes, and setting processing related to that content reproduction control, may be included.
  • For example, main content appropriate to the recognized age may be reproduced.
  • For example, depending on the recognized attributes, an animation is displayed on the display 12H.
  • For example, depending on the recognized attributes, control is performed to increase the volume, which is one of the setting processes related to content reproduction control.
  • The luminance, color tone, brightness, character size, and the like of the display 12H may also be adjusted according to the recognized attributes of the user U.
  • By performing the attribute recognition processing, it is possible to acquire the age group and degree of interest of users viewing the content (for example, an advertisement) displayed on the display 12H, and thus to analyze the effectiveness of the content.
  • Although the attribute recognition processing consumes a certain amount of power, in this modification the attribute recognition processing is performed only when the visible state is detected, so power consumption in the display device 100D can be suppressed.
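  • As an illustration of attribute-driven control, the sketch below maps recognized attributes to reproduction and setting actions. The thresholds and action names are assumptions; the text only states that content, volume, luminance, color tone, brightness, character size, and the like may be adjusted per attribute.

```python
# Hedged sketch: choose control actions from recognized viewer attributes.
def actions_for_attributes(attrs: dict) -> list:
    actions = []
    age = attrs.get("age")
    if age is not None and age < 10:
        actions.append("play_animation")           # illustrative choice for children
    if age is not None and age >= 70:
        actions.append("increase_volume")          # one of the setting processes
        actions.append("increase_character_size")
    if not actions:
        actions.append("play_default_main_content")
    return actions

print(actions_for_attributes({"age": 75, "gender": "female"}))
```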
  • This modification can also be applied to paper media, such as notices posted on bulletin boards.
  • For example, notices are posted on a bulletin board 71.
  • An imaging device 100I is attached near the center of the upper frame of the bulletin board 71.
  • The imaging device 100I has, for example, a box shape and has a camera 13I on one predetermined surface.
  • The imaging device 100I is a device that can be attached to an appropriate location on the bulletin board 71 using a thumbtack or the like, and has, for example, functions similar to those of the smartphone 100.
  • The viewing state detection processing is performed based on images captured by the camera 13I in the visual recognition state detection mode.
  • When the viewing state is detected, the imaging mode of the camera 13I transitions from the visual recognition state detection mode to the normal mode.
  • Then, the attribute recognition processing described above is performed based on the image obtained by imaging in the normal mode.
  • Through the attribute recognition processing, for example, the attributes of the users who viewed a notice and the number of times the notice was viewed are obtained. The obtained information can be used to analyze the effectiveness of a notice and the degree of interest in it.
  • The notices in the above description may instead be exhibits.
  • For example, a robot 72, which is an example of an exhibit, is placed on an exhibit device 100J.
  • The exhibit device 100J has a columnar portion 81 and a sheet portion 82.
  • The sheet portion 82 can be wound around the columnar portion 81, for example.
  • A camera 13J is provided substantially in the center of the columnar portion 81.
  • The viewing state detection processing is performed based on images captured by the camera 13J in the visual recognition state detection mode.
  • When the viewing state is detected, the imaging mode of the camera 13J transitions from the visual recognition state detection mode to the normal mode.
  • Then, the attribute recognition processing described above is performed based on the image obtained by imaging in the normal mode.
  • Through this processing, information indicating interest, such as the attributes of the users who viewed the exhibit and the number of times the exhibit was viewed, can be obtained. Using the obtained information, it is possible to analyze, for example, what kinds of users are interested in what kinds of exhibits.
  • The application executed by the application processor 101A corresponds to a part of the human body and a predetermined object, and may be determined based on at least one of information based on the image data obtained by the imaging element 104B and information obtained from sensors different from the imaging element 104B (for example, the position sensor 110 and the sensor 111).
  • The information based on the image data obtained by the imaging element 104B includes, for example, motion vector information.
  • The information obtained from the position sensor 110 and the sensor 111 includes, for example, at least one of position information, information about the tilt of the smartphone 100, information about illuminance, and time information.
  • The combination of a part of the human body and a predetermined object for starting one application may be similar to the combination for starting another application.
  • For example, for similar combinations, an alarm may be set in one case and the operation mode of the smartphone 100 may be set in another.
  • By using the information obtained from the sensor 111, it is possible to activate the application that matches the user's intention. For example, when the illuminance is low, the time information indicates nighttime, the position information indicates home, the smartphone 100 has been set down, and a finger gesture is made, the user is most likely setting an alarm for waking up. In that case, the application to be activated is determined to be the alarm.
  • Otherwise, the application to be activated is determined to be the mode setting control of the smartphone 100. Such a determination is made, for example, by the application processor 101A connected to the sensor 111. Further, when a medicine tablet is detected, and the motion vector further indicates that the position of the medicine has moved back and forth or up and down, in other words, when an action of taking medicine is detected, an application for registering the medication history may be activated. In addition, the place where the medicine was taken may be registered based on the position information.
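  • One way to picture this determination logic is as a rule table over the detected gesture and the sensor context, as in the hedged sketch below. The thresholds (illuminance, hour) and rule names are illustrative assumptions only, not values from the patent.

```python
# Hedged sketch: pick the application from a gesture plus sensor context.
def determine_application(gesture: str, ctx: dict) -> str:
    if (gesture == "finger_gesture" and ctx["illuminance_lx"] < 10
            and ctx["hour"] >= 21 and ctx["at_home"] and ctx["device_placed"]):
        return "alarm_setting"         # user is likely setting a wake-up alarm
    if gesture == "medicine_taking_motion":
        return "medication_history"    # may also record the place via position info
    return "mode_setting"              # fallback: smartphone mode setting control

ctx = {"illuminance_lx": 3, "hour": 22, "at_home": True, "device_placed": True}
print(determine_application("finger_gesture", ctx))  # -> alarm_setting
```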
  • A screen prompting the user to make a selection may also be displayed on the display 12.
  • The activated application may differ from the user's intention. In such a case, the launched application may be canceled.
  • The cancellation may be performed by voice input, or by a gesture for cancellation.
  • The gesture for cancellation may be detected by imaging in the detection mode.
  • In addition, a notification may be made to confirm whether or not the activated application matches the user's intention.
  • A notification requesting an additional action may also be made when executing the application.
  • The request for an additional action is, for example, a message such as "To allow the registration, please touch the confirm button." Such an additional action request may be displayed on the display 12, or may be notified by sound or vibration.
  • The additional action may be a motion in the time direction, an action of drawing a predetermined pattern in the air, or an action for fingerprint authentication, voiceprint authentication, password authentication, or the like.
  • The additional action is preferably one known only to the user. Examples of motions in the time direction include gestures that change over time (for example, rock, scissors, paper) and an action of moving an object corresponding to the predetermined object (for example, medicine) vertically or toward and away from the camera.
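  • Gating execution on such an additional action could look like the following sketch; the accepted action names are hypothetical placeholders.

```python
# Hedged sketch: require a user-confirming additional action before executing.
ACCEPTED_ACTIONS = {"confirm_button_touch", "fingerprint_ok",
                    "voiceprint_ok", "password_ok", "secret_gesture_ok"}

def execute_with_confirmation(app: str, additional_action: str) -> str:
    # E.g. after notifying "To allow the registration, please touch the confirm button."
    if additional_action in ACCEPTED_ACTIONS:
        return "executed:" + app
    return "canceled"

print(execute_with_confirmation("medication_history", "fingerprint_ok"))
```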
  • Part of the processing performed by the detection unit 139 and the system controller 131 may be performed outside the imaging element 104B.
  • For example, part of the processing performed by the detection unit 139 and the system controller 131 may be performed by the control unit 101 or the application processor 101A. In this way, which processing is performed by which functional block can be changed as appropriate.
  • Details of the part of the human body imaged in the recognition mode may be recognized by the system controller 131. Details of a part of the human body include facial expressions and wrinkles on the hands or face. The application to be activated may be determined in consideration of these details.
  • In the above description, the imaging element 104B performs imaging in one of the moving object detection mode, the detection mode, the recognition mode, and the normal mode. However, moving object detection may be performed based on image data obtained in the detection mode, without using the moving object detection mode. Also, the application to be activated may be determined based on the image data obtained in the detection mode.
  • The processing according to the present disclosure can also be configured as a control method, or as a program that causes a computer to execute the control method.
  • The program can be distributed via a server or the like and installed in an electronic device such as a smartphone.
  • The contents of the embodiment and the contents described in the modifications can be combined as appropriate without departing from the gist of the present disclosure.
  • The present disclosure can also be configured as follows.
  • (1) An electronic device having: an imaging element that performs imaging in a first mode or in a second mode that consumes less power than the first mode; and an application processor that executes an application corresponding to a part of a human body and a predetermined object when the part of the human body and the predetermined object are detected by imaging of the imaging element in the second mode.
  • The predetermined object includes a part of a human body different from the part of the human body.
  • The application processor recognizes the part of the human body as an operation instruction for the predetermined object, and executes the application according to the recognition result.
  • (9) The electronic device according to (8), wherein the part of the human body and the different part of the human body are a right hand and a left hand, respectively.
  • (10) The electronic device according to any one of (1) to (9), wherein an application that corresponds to the part of the human body and the predetermined object and that is determined based on at least one of information based on image data obtained by the imaging element and information obtained from a sensor different from the imaging element is executed.
  • (11) The electronic device according to (10), wherein the information obtained from the sensor includes at least one of information on the inclination of the electronic device, information on illuminance, position information, and time information.
  • (12) The electronic device according to any one of (1) to (11), wherein a notification requesting an additional operation is made when executing the application.
  • The additional operation is an operation for authentication.
  • The power consumption in the second mode is made smaller than the power consumption in the first mode by making the imaging parameters for imaging in the second mode smaller than the imaging parameters for imaging in the first mode.
  • A control method in which an imaging element performs imaging in a first mode or in a second mode that consumes less power than the first mode, and an application processor executes an application corresponding to a part of a human body and a predetermined object when the part of the human body and the predetermined object are detected by imaging of the imaging element in the second mode.
  • A program that causes a computer to execute a control method in which an imaging element performs imaging in a first mode or in a second mode that consumes less power than the first mode, and an application processor executes an application corresponding to a part of a human body and a predetermined object when the part of the human body and the predetermined object are detected by imaging of the imaging element in the second mode.
  • An electronic device having: an imaging element that performs imaging in a first mode or in a second mode that consumes less power than the first mode, and that detects whether or not it is in a visible state according to the imaging result in the second mode; and an application processor that executes a predetermined application when the result of the detection is the visible state.
  • The electronic device according to (25), further having another imaging element different from the imaging element, wherein the predetermined application includes processing for detecting an obstacle based on an image obtained from the other imaging element.
  • A control method in which an imaging element performs imaging in a first mode or in a second mode that consumes less power than the first mode and detects whether or not it is in a visible state according to the imaging result in the second mode, and an application processor executes a predetermined application when the result of the detection is the visible state.
  • A program that causes a computer to execute a control method in which an imaging element performs imaging in a first mode or in a second mode that consumes less power than the first mode and detects whether or not it is in a visible state according to the imaging result in the second mode, and an application processor executes a predetermined application when the result of the detection is the visible state.


Abstract

The present invention provides, for example, an electronic apparatus that improves operability while suppressing power consumption. According to the present invention, an electronic apparatus has an imaging element that performs imaging in a first mode or in a second mode that has lower power consumption than the first mode, and an application processor that executes an application corresponding to a portion of the human body and a prescribed object when the portion of the human body and the prescribed object have been detected by imaging by the imaging element in the second mode.

Description

Electronic device, control method, and program
The present disclosure relates to an electronic device, a control method, and a program.
Since it is desirable for electronic devices to consume as little power as possible, various techniques have been proposed for achieving low power consumption in electronic devices. For example, Patent Document 1 below describes an electronic device that shifts to high-power-consumption operation when a reference object such as a face is detected during low-power-consumption sensing.
JP 2020-145714 A
With the technique described in Patent Document 1, after shifting to high-power-consumption operation, the user still needs to perform an operation to launch an application, for example for camera shooting or information registration. That is, with the technique described in Patent Document 1, it is difficult to achieve both suppression of power consumption and improvement of operability in an electronic device.
One object of the present disclosure is to provide an electronic device, a control method, and a program that can, for example, improve operability when executing an application while suppressing power consumption.
The present disclosure is, for example,
an electronic device having:
an imaging element that performs imaging in a first mode or in a second mode that consumes less power than the first mode; and
an application processor that executes an application corresponding to a part of a human body and a predetermined object when the part of the human body and the predetermined object are detected by imaging of the imaging element in the second mode.
The present disclosure is, for example,
a control method in which:
an imaging element performs imaging in a first mode or in a second mode that consumes less power than the first mode; and
an application processor executes an application corresponding to a part of a human body and a predetermined object when the part of the human body and the predetermined object are detected by imaging of the imaging element in the second mode.
The present disclosure is, for example,
a program that causes a computer to execute a control method in which:
an imaging element performs imaging in a first mode or in a second mode that consumes less power than the first mode; and
an application processor executes an application corresponding to a part of a human body and a predetermined object when the part of the human body and the predetermined object are detected by imaging of the imaging element in the second mode.
The present disclosure is, for example,
an electronic device having:
an imaging element that performs imaging in a first mode or in a second mode that consumes less power than the first mode, and detects whether or not it is in a visible state according to the imaging result in the second mode; and
an application processor that executes a predetermined application when the result of the detection is the visible state.
The present disclosure is, for example,
a control method in which:
an imaging element performs imaging in a first mode or in a second mode that consumes less power than the first mode, and detects whether or not it is in a visible state according to the imaging result in the second mode; and
an application processor executes a predetermined application when the result of the detection is the visible state.
The present disclosure is, for example,
a program that causes a computer to execute a control method in which:
an imaging element performs imaging in a first mode or in a second mode that consumes less power than the first mode, and detects whether or not it is in a visible state according to the imaging result in the second mode; and
an application processor executes a predetermined application when the result of the detection is the visible state.
A diagram showing an appearance example of a smartphone according to an embodiment.
A diagram showing an internal configuration example of the smartphone according to an embodiment.
A diagram showing a configuration example of an imaging element according to an embodiment.
A flowchart for explaining an operation example of the smartphone according to an embodiment.
A diagram referred to when describing an example of an application that is activated when a part of a human body and a predetermined object are detected.
A diagram referred to when describing an example of an application that is activated when a part of a human body and a predetermined object are detected.
A diagram referred to when describing an example of an application that is activated when a part of a human body and a predetermined object are detected.
A diagram referred to when describing an example of an application that is activated when a part of a human body and a predetermined object are detected.
A diagram referred to when describing an example of an application that is activated when a part of a human body and a predetermined object are detected.
A diagram for explaining a modification.
A diagram for explaining a modification.
A diagram for explaining a modification.
A diagram for explaining a modification.
A diagram for explaining a modification.
A diagram for explaining a modification.
A diagram for explaining a modification.
A diagram for explaining a modification.
A diagram for explaining a modification.
A diagram for explaining a modification.
A diagram for explaining a modification.
A diagram for explaining a modification.
A diagram for explaining a modification.
Hereinafter, embodiments and the like of the present disclosure will be described with reference to the drawings. The description will be given in the following order.
<Issues to be considered in this disclosure>
<One embodiment>
<Modifications>
The embodiments and the like described below are preferred specific examples of the present disclosure, and the content of the present disclosure is not limited to these embodiments and the like. The same or equivalent configurations are denoted by the same reference numerals, and redundant description is omitted as appropriate.
<Issues to be considered in this disclosure>
First, in order to facilitate understanding of the present disclosure, issues to be considered in the present disclosure will be described.
Mobile terminals such as smartphones and smartwatches (watch-type mobile terminals) generally turn off the display when there is no operation input for a certain period (hereinafter referred to as the sleep state, as appropriate), or transition to a state in which operation input is not accepted unless a predetermined password is entered (hereinafter referred to as the locked state, as appropriate). In recent years, many applications have been distributed via networks such as the Internet, and many applications can now be installed on mobile terminals. Examples of such applications include applications for making payments, taking images, playing games, and making purchases; applications for registering medication history, exercise history such as walking, and biometric information such as pulse, weight, and blood pressure; and map applications.
For example, consider a case where the user activates and executes an application while the mobile terminal is locked. The user first performs an operation to unlock the mobile terminal, for example by entering a password consisting of numbers and letters. After the mobile terminal is unlocked, the user selects the desired application from the many applications installed on the mobile terminal and activates it by touching its icon. In this way, the user must perform many operations to activate and execute a single application. Here, it is conceivable to use the technology described in Patent Document 1 to automatically cancel the sleep state or locked state when a reference object such as a face is detected during low-power sensing. However, even if the sleep state or locked state could be canceled automatically, operations for selecting and executing an application would still be required, so operability could not be further improved. Based on the above points, an embodiment of the present disclosure will be described in detail.
<One embodiment>
[Appearance of the smartphone]
In this embodiment, a smartphone is described as an example of the electronic device. Of course, other mobile terminals such as tablet computers and smartwatches can also be used as the electronic device.
FIG. 1 is a diagram showing an example of the appearance of a smartphone (smartphone 100) according to this embodiment. The smartphone 100 has a housing 11. A display 12 is provided on one main surface of the housing 11. A front camera 13A that captures images of the user of the smartphone 100 and the like is provided, for example, above the display 12. A rear camera 13B is provided on the main surface opposite to the main surface on which the display 12 is provided. A part of a human body and a predetermined object can be imaged by the front camera 13A or the rear camera 13B. Buttons 14 for turning the power on and off and the like are provided on the side surface of the housing 11.
[Example of internal configuration of the smartphone]
FIG. 2 is a block diagram showing an example of the internal configuration of the smartphone 100 according to this embodiment. The smartphone 100 includes a control unit 101, a microphone 102, an audio signal processing unit 103 connected to the microphone 102, an imaging unit 104, a network unit 105, a network signal processing unit 106 connected to the network unit 105, a speaker 107, an audio reproduction unit 108 connected to the speaker 107, the display 12 described above, a screen display unit 109 connected to the display 12, a position sensor 110, and a sensor 111. The audio signal processing unit 103, the imaging unit 104, the network signal processing unit 106, the audio reproduction unit 108, the screen display unit 109, the position sensor 110, and the sensor 111 are each connected to the control unit 101.
The control unit 101 includes a CPU (Central Processing Unit) and the like. The control unit 101 has a ROM (Read Only Memory) in which programs are stored, a RAM (Random Access Memory) used as a work area when programs are executed, and the like (not shown). The control unit 101 comprehensively controls the operation of the smartphone 100.
The control unit 101 has an application processor 101A as a functional block. When a part of a human body and a predetermined object are detected by imaging in the second mode (described in detail later) of the imaging element of the imaging unit 104 (the imaging element 104B described later), the application processor 101A executes an application (for example, software that performs specific processing) corresponding to the part of the human body and the predetermined object. Here, execution of an application includes at least activation of the application. Execution of an application may also include processing for displaying, on the display 12, an input screen for the activated application, or control in which registration according to the application is performed automatically. How much of the processing after activation is included in the execution of the application can be set as appropriate. Note that a plurality of applications may be stored in the smartphone 100. In this case, execution of an application may include processing for selecting a predetermined application from the plurality of applications.
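As a rough illustration only, the dispatch performed by the application processor 101A can be pictured as a table lookup keyed by the detected pair, as in the sketch below; the pairs and application names are invented for illustration and are not the patent's actual interfaces.

```python
# Hedged sketch: map a detected (body part, object) pair to an application.
from typing import Optional

APP_TABLE = {
    ("hand", "credit_card"): "payment_app",
    ("hand", "medicine"):    "medication_log_app",
}

def on_detection(body_part: str, obj: str) -> Optional[str]:
    app = APP_TABLE.get((body_part, obj))
    if app is not None:
        # At minimum, activation; further steps (input screen, auto-registration)
        # can be included in "execution" as set by the designer.
        print("activating " + app)
    return app

on_detection("hand", "medicine")
```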
The microphone 102 picks up the user's speech and other sounds. The audio signal processing unit 103 performs known audio signal processing on the audio data of the sounds picked up via the microphone 102.
The imaging unit 104 includes, for example, an optical system 104A such as a lens, and the imaging element 104B. A CMOS (Complementary Metal Oxide Semiconductor) sensor or a CCD (Charge Coupled Device) sensor can be applied as the imaging element 104B. In this embodiment, the imaging element 104B has a signal processing circuit 104C; for example, it is configured as a one-chip sensor in which the imaging element 104B and the signal processing circuit 104C are stacked.
The network unit 105 includes an antenna and the like. The network signal processing unit 106 performs modulation/demodulation processing, error correction processing, and the like on data transmitted and received via the network unit 105.
The audio reproduction unit 108 performs processing for reproducing sound from the speaker 107, such as known audio signal processing including amplification and D/A conversion.
An LCD (Liquid Crystal Display) or an organic EL (Electro Luminescence) display can be applied as the display 12. The screen display unit 109 performs known processing for displaying various kinds of information on the display 12. For example, the screen display unit 109 performs processing for displaying a UI corresponding to an application on the display 12 under the control of the application processor 101A. Note that the display 12 may be configured as a touch panel; in this case, the screen display unit 109 also performs processing for detecting the operation position associated with a touch operation.
The position sensor 110 is a positioning unit that measures the current position using, for example, a system called GNSS (Global Navigation Satellite System).
The sensor 111 is a general term for sensors other than the imaging element 104B and the position sensor 110. Examples of the sensor 111 include sensors that detect the movement and state of the smartphone 100, specifically an acceleration sensor, a gyro sensor, an electronic compass, and the like. The sensor 111 may also include sensors that detect the surrounding environment, such as sensors that measure temperature, humidity, atmospheric pressure, and ambient brightness (illuminance). Further, the sensor 111 may include sensors that detect biometric information of the user of the smartphone 100, such as sensors that detect the user's blood pressure, pulse, sweat glands, body temperature, and fingerprints.
[Configuration example of the imaging element]
Next, a configuration example of the imaging element 104B according to this embodiment will be described with reference to FIG. 3. The imaging element 104B has a pixel array unit 121 in which light-receiving elements such as photodiodes are arranged two-dimensionally. The pixel array unit 121 includes a horizontal scanning circuit, a vertical scanning circuit, an A/D (Analog to Digital) conversion circuit, and the like connected to the light-receiving elements (not shown).
The imaging element 104B also has, as the signal processing circuit 104C, a system controller 131, a power control unit 132, a clock generation circuit 133, a ring oscillator 134 connected to the clock generation circuit 133, a PLL (Phase Locked Loop) 135, a BIAS 136, a timing generator 137, a sensor I/F (Interface) 138, a detection unit 139, a camera signal processing unit 140, and an output I/F 141. The system controller 131, the power control unit 132, the timing generator 137, the detection unit 139, the camera signal processing unit 140, and the output I/F 141 are connected to one another via a bus 151.
The system controller 131 has, for example, a microprocessor and comprehensively controls the operation of each unit in the imaging element 104B. The system controller 131 has an output I/F 131A. Commands are transmitted and received between the system controller 131 and the control unit 101 (for example, the application processor 101A) via the output I/F 131A, for example over a serial interface such as I2C.
The power control unit 132 controls the power supplied to each unit. As will be described in detail later, the power control unit 132 controls the power supplied to each unit according to the operation mode of the imaging element 104B.
The clock generation circuit 133 generates a clock signal based on the outputs of the ring oscillator 134 and the PLL 135. The clock signal generated by the clock generation circuit 133 is supplied to each unit of the imaging element 104B, which operates based on the clock signal.
The BIAS 136 generates and supplies a stable reference voltage and reference current to the circuits that control the pixel array unit 121 and process its output signals.
The timing generator 137 generates various timing signals. The timing signals generated by the timing generator 137 are supplied to the pixel drive circuit of the pixel array unit 121 and to the sensor I/F 138.
The sensor I/F 138 is an interface for outputting the image data (for example, digitized image data) output from the pixel array unit 121 to subsequent stages. The sensor I/F 138 operates based on the timing signals supplied from the timing generator 137. The detection unit 139 and the camera signal processing unit 140 are connected downstream of the sensor I/F 138.
Based on the image data supplied from the sensor I/F 138, the detection unit 139 detects whether the image data includes a part of a human body, such as a face or a hand, or a predetermined object. The predetermined object may itself be a part of a human body, an object grasped by a part of a human body, or an object located away from a part of a human body. A learning model obtained by machine learning such as a DNN (Deep Neural Network) may be applied to the detection processing performed by the detection unit 139. The detection unit 139 notifies the system controller 131 of the detection result via the bus 151.
The camera signal processing unit 140 performs known image processing, such as interpolation, color correction, and defect correction, on the image data supplied from the sensor I/F 138. The image data subjected to camera signal processing by the camera signal processing unit 140 is supplied to the control unit 101 via the output I/F 141. Then, the screen display unit 109 operates under the control of the control unit 101, so that an image based on the image data is displayed on the display 12. For example, MIPI (Mobile Industry Processor Interface) can be applied as the output I/F 141.
Note that the front camera 13A and the rear camera 13B may each have the configuration of the imaging unit 104 described above, or may share part of the configuration.
[Operation example of the smartphone]
Next, an operation example of the smartphone according to this embodiment will be described. In the smartphone 100, processing for detecting a part of a human body and a predetermined object is performed at all times while the power is on. For example, the processing for detecting a part of a human body and a predetermined object is performed even in the sleep state or the locked state. Of course, there may be periods in which a part of a human body and a predetermined object are not being detected, such as while the smartphone 100 is being operated. Note that the part of the human body and the predetermined object to be detected may be set by the user, may be preset in the smartphone 100, or may be set in the smartphone 100 by downloading data indicating them from a server.
Performing the processing for detecting a part of a human body and a predetermined object at all times is, however, undesirable from the viewpoint of power consumption. Therefore, in this embodiment, the imaging element 104B is operated with low power consumption to perform the processing for detecting a part of a human body and a predetermined object. Specifically, in addition to the normal mode (an example of the first mode) for ordinary imaging such as capturing a subject, the imaging element 104B can operate in a mode in which the power consumption of the imaging element 104B is smaller than in the normal mode and roughly the outline of an object can be detected, more specifically a detection mode (an example of the second mode) for detecting a part of a human body and a predetermined object. In this embodiment, the imaging element 104B also has a mode that performs imaging with even lower power consumption than the detection mode. This mode can detect only the presence or absence of a moving object (it does not capture the shape of the moving object), and is hereinafter referred to as the moving object detection mode. Further, in this embodiment, the imaging element 104B has an imaging mode that consumes more power than the detection mode and can obtain image data of sufficient quality to recognize the pattern of an object. This mode is hereinafter referred to as the recognition mode. In the recognition mode, objects are identified by, for example, multi-wavelength sensing.
For example, the power consumption in the detection mode is made smaller than that in the normal mode by reducing the parameters related to imaging (hereinafter referred to as imaging parameters, as appropriate) relative to the normal mode. The imaging parameters include parameters related to the amount of image data obtained by imaging and parameters related to driving during imaging. Specific examples of the former include the resolution, gradation, color, imaging region (ROI (Region of Interest)), and wavelength resolution of the image data. By making the resolution and the like of the image data obtained in the detection mode smaller than those in the normal mode, the power consumption of imaging in the detection mode can be made smaller than that in the normal mode. Specific examples of the latter include the drive clock frequency and frame rate of the imaging element 104B, and settings related to the operating/idle state (non-operating state or low-power operating state) of each functional block. By making the drive clock and frame rate during imaging in the detection mode lower than those in the normal mode, and by keeping as few functional blocks as possible in the operating state, the power consumption of imaging in the detection mode can be made smaller than that in the normal mode. Similarly, by reducing the imaging parameters, the power consumption of imaging in the moving object detection mode can be made smaller than that in the detection mode. In the recognition mode, the imaging parameters are set larger than in the moving object detection mode and the detection mode. Note that the imaging parameters in the normal mode and in the recognition mode may be the same; in the recognition mode, however, the imaging region is optimized, so the smartphone 100 operates with lower power consumption than in the normal mode.
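To make the parameter relationships concrete, the sketch below tabulates one possible set of imaging parameters per mode. The numbers are invented for illustration; the text only requires that each successive low-power mode uses smaller parameters, and that recognition-mode parameters may equal normal-mode parameters apart from the optimized imaging region.

```python
# Hedged sketch: illustrative per-mode imaging parameters (all values assumed).
IMAGING_PARAMS = {
    "moving_object_detection": {"resolution": (32, 24),     "fps": 1,  "bits": 1},
    "detection":               {"resolution": (160, 120),   "fps": 5,  "bits": 4},
    "recognition":             {"resolution": (4032, 3024), "fps": 30, "bits": 10,
                                "roi": "optimized"},
    "normal":                  {"resolution": (4032, 3024), "fps": 30, "bits": 10},
}

def relative_cost(mode: str) -> int:
    # A crude proxy for power: pixels x frame rate x bit depth.
    p = IMAGING_PARAMS[mode]
    w, h = p["resolution"]
    return w * h * p["fps"] * p["bits"]

assert relative_cost("moving_object_detection") < relative_cost("detection")
assert relative_cost("detection") < relative_cost("normal")
```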
 When the imaging element 104B has the configuration shown in FIG. 3, each functional block is controlled for each mode, for example, as follows (see also the sketch after this list).
・Blocks that are supplied with power and operate as usual regardless of the mode: system controller 131, BIAS 136
・Blocks that are not supplied with power and do not operate in the normal mode: power control section 132, detection section 139, ring oscillator 134
・Blocks that operate only in the normal mode: PLL 135, camera signal processing unit 140, output I/F 141
・Blocks whose power consumption is optimized by adjusting the imaging parameters in the detection mode, the moving object detection mode, and the recognition mode: pixel array section 121, clock generation circuit 133, timing generator 137, sensor I/F 138
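 The per-block control above can be summarized as a small power-gating table, as in the following sketch. The block names follow FIG. 3; the set and function names are hypothetical.

ALWAYS_ON      = {"system_controller_131", "bias_136"}
NORMAL_ONLY    = {"pll_135", "camera_signal_processing_140", "output_if_141"}
LOW_POWER_ONLY = {"power_control_132", "detection_139", "ring_oscillator_134"}
PARAM_SCALED   = {"pixel_array_121", "clock_generation_133",
                  "timing_generator_137", "sensor_if_138"}

def powered_blocks(mode: str) -> set:
    # PARAM_SCALED blocks run in every mode, with their imaging parameters
    # scaled down in the low-power modes.
    blocks = ALWAYS_ON | PARAM_SCALED
    blocks |= NORMAL_ONLY if mode == "normal" else LOW_POWER_ONLY
    return blocks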
 Note that when the operation mode of the smartphone 100 is the detection mode, the moving object detection mode, or the recognition mode rather than the normal mode, the application processor 101A of the control unit 101 is in a non-operating state or a low-power operating state (hereinafter collectively referred to as a hibernation state as appropriate). This reduces the power consumption not only of the imaging element 104B but also of the control unit 101.
 A specific operation example of the smartphone 100 will be described with reference to the flowchart of FIG. 4.
 In step ST11, control is performed to set the operation mode of the smartphone 100 to the moving object detection mode. Imaging in the moving object detection mode is started with, for example, the smartphone 100 entering a sleep state or a locked state, or its position information falling within a predetermined area, as a trigger. For example, when such a trigger occurs, the control unit 101 notifies the system controller 131 of the imaging element 104B of that fact. On receiving the notification from the control unit 101, the system controller 131 puts the functional blocks necessary for imaging in the moving object detection mode into the operating state and puts the functional blocks unnecessary for that imaging into the hibernation state.
 Specifically, the system controller 131 performs control to put the power control section 132, the ring oscillator 134, and the detection section 139 into the operating state, and performs control to put the PLL 135, the camera signal processing unit 140, and the output I/F 141 into the hibernation state. The system controller 131 then sets, for each section, imaging parameters (for example, resolution, gradation, drive clock, and frame rate) corresponding to the moving object detection mode, so that the drive circuit of the pixel array section 121, the clock generation circuit 133, the timing generator 137, and the sensor I/F 138 operate with those parameters. The parameters corresponding to the moving object detection mode are, for example, parameters preset so that the presence or absence of a moving object can just be detected. Since the presence or absence of a moving object can be detected from a change in pixel output, the number of driven pixels may theoretically be one. The power consumed by the functional blocks operating with the parameters corresponding to the moving object detection mode may be smaller than that consumed during operation in the detection mode or the normal mode. The control for reducing the power supply is performed by the power control section 132. The process then proceeds to step ST12.
 In step ST12, it is determined whether a moving object has been detected as a result of imaging in the moving object detection mode. For example, the detection section 139 determines whether a moving object has been detected on the basis of the image data output from the sensor I/F 138. The detection section 139 determines, for example, that a moving object is present when the pixel values output at predetermined intervals have changed by a certain amount or more, and that no moving object is present when they have not. The detection section 139 notifies the system controller 131 of the determination result. If there is no moving object (No), the process returns to step ST11 and imaging in the moving object detection mode is repeated. If there is a moving object (Yes), the process proceeds to step ST13.
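 A minimal sketch of the kind of pixel-difference test described for step ST12 follows. The text says only that pixel values change "by a certain amount or more"; the thresholds below are assumed tuning values, not values from the disclosure.

import numpy as np

def moving_object_present(prev: np.ndarray, curr: np.ndarray,
                          pixel_delta: int = 8, min_changed: int = 1) -> bool:
    """Declare a moving object when enough pixels change by a threshold.

    With min_changed=1, even a single driven pixel suffices, as noted above.
    """
    changed = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) >= pixel_delta
    return int(changed.sum()) >= min_changed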
 In step ST13, imaging in the detection mode is performed. For example, the system controller 131, notified by the detection section 139 that a moving object is present, sets, for each section, imaging parameters corresponding to the detection mode so that the drive circuit of the pixel array section 121, the clock generation circuit 133, the timing generator 137, and the sensor I/F 138 operate with those parameters. The parameters corresponding to the detection mode are, for example, parameters preset so that a part of a human body and a predetermined object can be detected. Since the detection mode requires higher recognition accuracy than the moving object detection mode, the power supplied to the functional blocks needs to be larger than in the moving object detection mode. The control for increasing the power supply is performed by the power control section 132. The process then proceeds to step ST14.
 In step ST14, it is determined whether a part of a human body and a predetermined object have been detected as a result of imaging in the detection mode. For example, the detection section 139 determines, on the basis of the image data output from the sensor I/F 138, whether a part of a human body and a predetermined object have been detected. Known processes can be applied as the detection process. For example, a process of detecting skin color or a process of detecting feature points such as joints can be applied to detect a part of a human body, and a process using a learning model obtained by machine learning or a process such as contour detection can be applied to detect a predetermined object. The detection section 139 notifies the system controller 131 of the determination result. When the notification indicates that a part of a human body and a predetermined object have been detected, the system controller 131 transmits an interrupt to the control unit 101. On receiving the interrupt, the control unit 101 switches the application processor 101A from the hibernation state to the operating state. If a part of a human body and a predetermined object are not detected (No), the process returns to step ST13 and imaging in the detection mode is repeated. If a part of a human body and a predetermined object are detected (Yes), the process proceeds to step ST15.
 In step ST15, imaging in the recognition mode is performed. The detection section 139 supplies the image data obtained by imaging in the recognition mode to the system controller 131. The system controller 131 performs recognition on the part of the human body and the predetermined object, obtains detailed information about them, and on the basis of this information recognizes the application to be started. The recognition result is supplied to the application processor 101A. Note that in imaging in the recognition mode, for example, only the region of the part of the human body or the region of the predetermined object detected in the detection mode is imaged with high image quality (for example, at high resolution). This suppresses the power consumed by imaging in the recognition mode. The process then proceeds to step ST16.
 In step ST16, the application processor 101A, now in the operating state, starts the application corresponding to the recognition result supplied from the system controller 131. The process then proceeds to step ST17.
 In step ST17, the application processor 101A executes the started application. As described above, how far the application is executed after startup (for example, only to the input stage, or all the way to registration) can be changed as appropriate. It is preferable, however, that the execution result of the application be reported in a way the user can recognize, for example by being displayed on the display 12 or announced by voice through the speaker 107.
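 The flow of steps ST11 to ST17 can be condensed into the following sketch. The sensor and app_processor objects and their methods are hypothetical stand-ins for the components described above; only the branching mirrors FIG. 4.

def run_pipeline(sensor, app_processor):
    sensor.set_mode("moving_object_detection")          # ST11
    while not sensor.moving_object_detected():          # ST12
        pass                                            # keep imaging at minimal power
    sensor.set_mode("detection")                        # ST13
    while not sensor.body_part_and_object_detected():   # ST14
        pass
    app_processor.wake()                                # interrupt from the sensor
    sensor.set_mode("recognition")                      # ST15
    result = sensor.recognize_regions_of_interest()
    app = app_processor.launch(result.application)      # ST16
    app.execute(result)                                 # ST17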
 Note that imaging in the moving object detection mode, the detection mode, and the recognition mode may use either the front camera 13A or the rear camera 13B.
[Specific Examples of Applications]
(First Example)
 Next, specific examples of a part of a human body and a predetermined object, and of the applications started according to the detection result of the part of the human body and the predetermined object, will be described. The specific examples described below are merely examples; other examples are possible, and the electronic device according to the present disclosure may support some or all of the following specific examples. The combinations of the detected part of the human body and the predetermined object with the applications started according to the detection result may be preset in the smartphone 100, may be set by the user, or may be set automatically by the smartphone 100 through machine learning.
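 One way to represent such combinations is a simple lookup table, as in the following sketch. The keys and application names are illustrative, drawn from the examples described below; the table structure itself is an assumption.

APP_TABLE = {
    ("both_hands", "document"): "camera",                 # first example
    ("fingertips", "medicine_tablet"): "medication_log",  # second example
    ("fingertip", "calendar_date"): "schedule",
    ("face", "lotion_container"): "cosmetics_log",
    ("fingers", "fingers"): "alarm",                      # third example
}

def application_for(body_part: str, obj: str):
    # Returns None when no application is registered for the combination.
    return APP_TABLE.get((body_part, obj))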
 In the first example, the part of the human body is the user's hands, the predetermined object is an object held or gripped by the user's hands, and the application is the camera. In this example, the application processor 101A recognizes the part of the human body as an operation instruction for the object (for example, an imaging instruction) and performs control to capture an image with the camera according to the recognition result.
 For example, as shown in FIG. 5A, suppose that imaging in the detection mode yields image data containing the user's hands 21A and 21B and a document 22. Imaging in the recognition mode then detects that the hands 21A and 21B are holding the document 22.
 The detection section 139 notifies the system controller 131 that a part of a human body and a predetermined object have been detected, and outputs the image data obtained in the recognition mode to the system controller 131. By performing recognition processing on the image data, the system controller 131 recognizes the characters and regions of the document and the operation corresponding to the gesture. In this example, both hands (hands 21A and 21B) are detected and an object is present between them, so the system controller 131 recognizes that the application to be started is the camera and that the instruction is to capture an image of the object located between the hands.
 The system controller 131 notifies the application processor 101A of the recognition result. On receiving the notification, the application processor 101A controls the imaging unit 104 to perform imaging. The imaging unit 104 operates according to this control and captures an image; specifically, the region in which the document 22 exists is imaged by the imaging element 104B. Since the region containing the document 22 can be regarded as an ROI, the document 22 is preferably imaged with high image quality; its imaging is therefore performed, for example, in the normal mode. The image data of the document 22 obtained in the normal mode undergoes image processing for high image quality by the camera signal processing unit 140, and the processed image data (see FIG. 5C) is output to the control unit 101 via the output I/F 141. The image data of the document 22 is displayed on the display 12 or stored in memory.
(Second Example)
 In the second example, the part of the human body is the user's fingers, and the predetermined object is an object located at the fingertips. The object may be pinched between the fingertips. The application started in this case is an application that registers information related to the object located at the fingertips, for example information on the usage history of that object.
 For example, as shown in FIG. 6A, suppose that imaging in the detection mode yields image data containing the user's hand 23, including a thumb 23A and an index finger 23B, and a medicine tablet 24 located at the tips of the thumb 23A and the index finger 23B. The detection section 139 notifies the system controller 131 that a part of a human body and a predetermined object have been detected, and the image data obtained in the recognition mode is supplied to the system controller 131. By performing recognition processing on the image data, the system controller 131 recognizes the positional information of the fingertips of the thumb 23A and the index finger 23B, recognizes that the object at the fingertips is the medicine tablet 24 and what type of medicine it is, and further recognizes the operation corresponding to the gesture. In this example, since the medicine tablet 24 is present at the fingertips, the system controller 131 recognizes that the application to be started is an application for registering the usage history of the medicine tablet 24. The system controller 131 notifies the application processor 101A of the recognition result.
 On receiving the notification, the application processor 101A starts the application for registering the medication history and registers the medication record (for example, the type of medicine taken and the date and time) in that application. FIG. 6B shows an example of a medicine registration screen in the application; the screen shown in FIG. 6B is displayed on the display 12. According to this example, a medication history that tends to be forgotten can be registered easily.
 The object located at the fingertip may instead be a date on a calendar. In this case, the information related to the calendar date can be, for example, the plans written in the field for that date. When a fingertip and a calendar located at the fingertip are detected, for example the schedule function of the smartphone 100 is started as the application, and the plans, which are information related to the calendar date, are registered in the schedule function.
 The part of the human body may also be, for example, the face rather than a finger, and the application may register a usage history of cosmetics rather than a medication history. For example, suppose that imaging in the detection mode detects the user's face 25 and a container 26, as shown in FIG. 7, and that imaging in the recognition mode allows the system controller 131 to recognize that the container 26 is a container of skin lotion. In this case, the system controller 131 recognizes that the application to be started is, for example, an application for registering the usage history of cosmetics, and notifies the application processor 101A of the recognition result.
 On receiving the notification, the application processor 101A starts the application for registering the usage history of cosmetics and registers the usage record (for example, the type of cosmetics used and the date and time of use) in that application. In this way, the combination of the part of the human body and the predetermined object can be changed as appropriate, and so can the application started according to the combination.
(Third Example)
 In the third example, imaging in the detection mode detects a part of a human body and a different part of a human body (the predetermined object in this example), and an application corresponding to the combination of the two is executed. In this example, the part of the human body is the fingers of one hand, and the different part of the human body is the fingers of the other hand. An alarm is an example of an application started according to the combination of the fingers of the two hands.
 For example, as shown in FIG. 8A, suppose that imaging in the detection mode yields image data containing fingers 31 of one hand and fingers 32 of the other hand. The detection section 139 notifies the system controller 131 that the fingers of each hand have been detected, and the image data obtained by imaging in the recognition mode is supplied to the system controller 131. By performing recognition processing on the image data, the system controller 131 recognizes the positions of the fingers 31 and 32 and the number of fingers. In this example, the fingers 31 and 32 appear overlapping in the image data, so the system controller 131 recognizes that the application to be started is, for example, the alarm. The system controller 131 notifies the application processor 101A of the recognition result.
 On receiving the notification, the application processor 101A starts the application that sets an alarm and sets the alarm to the time (7 o'clock in this example) corresponding to the total number of fingers 31 and 32 (seven in this example). FIG. 8B shows an example of an alarm setting screen in the application; the screen shown in FIG. 8B (a screen on which an alarm has been automatically set for 7:00) is displayed on the display 12. According to this example, an alarm can be set easily.
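 A minimal sketch of the finger-count-to-alarm mapping in this example follows. The "total fingers -> hour on the hour" rule comes from the text (seven fingers -> 7:00); the split between hands and the handling of counts outside one to ten are assumptions.

import datetime

def alarm_time_from_fingers(one_hand: int, other_hand: int) -> datetime.time:
    # Map the total finger count to an alarm hour, as in FIG. 8A/8B.
    total = one_hand + other_hand
    if not 1 <= total <= 10:
        raise ValueError("finger count must be between 1 and 10")
    return datetime.time(hour=total, minute=0)

print(alarm_time_from_fingers(3, 4))  # 07:00:00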
 The application in this example is not limited to an alarm. For example, the application may be a setting of the smartphone 100. The smartphone 100 can be given a mode for private use of the smartphone 100 (hereinafter referred to as a private mode as appropriate) and a mode for use of the smartphone 100 at work (hereinafter referred to as an office mode as appropriate). In this case, the application corresponding to the finger combination may be control for setting either the private mode or the office mode. The private mode and the office mode differ, for example, in the communication carrier (line operator) to be connected to, the types of applications that issue various notifications, and the arrangement of icons on the display 12.
 For example, as shown in FIGS. 9A and 9B, suppose that imaging in the detection mode yields image data containing fingers 31 of one hand and fingers 32 of the other hand. The detection section 139 notifies the system controller 131 that the fingers 31 and 32 have been detected, and the image data obtained by imaging in the recognition mode is supplied to the system controller 131. By performing recognition processing on the image data, the system controller 131 recognizes the positions and the number of the fingers 31 and 32, and further recognizes that the application to be started is the mode setting control. The system controller 131 notifies the application processor 101A of the recognition result.
 On receiving the notification, the application processor 101A launches the application that sets the mode. When the total number of fingers 31 and 32 is six, as in FIG. 9A, the application processor 101A performs the settings corresponding to the private mode; when the total is seven, as in FIG. 9B, it performs the settings corresponding to the office mode. Of course, the number of fingers corresponding to each mode is not limited to six or seven.
[Effects Obtained by the Present Embodiment]
 According to the present embodiment described above, when a part of a human body and a predetermined object are detected by imaging with low power consumption, the application corresponding to the detected part of the human body and predetermined object can be started automatically. Since the user does not need to unlock the smartphone or select the desired application from among many applications, the operations required to start an application can be greatly reduced. That is, operability in executing applications can be improved while power consumption is suppressed.
 Since imaging in the detection mode is performed with low power consumption, the remaining battery capacity of the smartphone does not drop significantly even when imaging in the detection mode continues for a long time. Because imaging in the detection mode can be performed over a long period, various actions (combinations of a part of a human body and a predetermined object) can be detected and the applications corresponding to the detection results can be started. That is, not only applications that tend to be executed in a specific time period but also applications that may be executed at various times and places (for example, imaging, payment, and schedule applications) can be started easily.
<Modifications>
 Although the embodiment of the present disclosure has been specifically described above, the content of the present disclosure is not limited to the above-described embodiment, and various modifications based on the technical idea of the present disclosure are possible. Configurations identical or equivalent to those of the embodiment are given the same reference numerals, and redundant description is omitted as appropriate.
[Modification 1]
 First, Modification 1 will be described. In this modification and in Modifications 2 to 7 below, the imaging element 104B can perform imaging in the normal mode and in a mode that consumes less power than the normal mode, as in the embodiment. The low-power imaging mode is a mode that suppresses power consumption by supplying power to the appropriate functional blocks of the signal processing circuit 104C at least to the extent that it is possible to detect whether the user is viewing the electronic device. The signal processing circuit 104C, supplied with power corresponding to this mode, performs the process of detecting whether the user is in the viewing state. This imaging mode, which can detect whether the user is viewing the electronic device and consumes less power than the normal mode, is hereinafter referred to as a viewing state detection mode as appropriate. Although imaging in the viewing state detection mode consumes less power than imaging in the normal mode, its power supply is constrained; the quality of images obtained in the viewing state detection mode is therefore lower than that of images obtained in the normal mode. The viewing state means a state in which the user is looking at an electronic device such as a smartphone, and a user in the viewing state is referred to as a viewer as appropriate. A specific example of the user is a person, but the user may also be a robot or an animal such as a pet. The application processor 101A executes a predetermined application when the result of detection by the imaging element 104B indicates the viewing state. The specific content of the modifications is described below.
 FIGS. 10A to 10C are diagrams for schematically explaining this modification. FIG. 10A shows the user U of the smartphone 100 looking at a map displayed on the display 12. In this state, the smartphone 100 uses the front camera 13A to determine whether the user U is looking at the smartphone 100, that is, whether the user U is in the viewing state.
 When it is detected that the user U is in the viewing state, the application processor 101A performs obstacle detection processing as an example of a predetermined application. Specifically, as shown in FIG. 10B, imaging using the rear camera 13B is performed. For example, when the rear camera 13B is controlled to be off, the rear camera 13B is activated, and after the rear camera 13B has been set up for imaging in the normal mode, imaging by the rear camera 13B starts. Since the rear camera 13B images in the normal mode, the images it obtains are of higher quality than those obtained in the viewing state detection mode. Obstacle detection processing, which detects obstacles using these images, is then performed. Obstacles include, for example, people, moving bodies such as cars and bicycles, and objects in front of the user U. Because the obstacle detection processing uses high-quality images, it can be performed with high accuracy. When an obstacle in front of the user U is detected as a result of the obstacle detection processing, the user U is notified that there is an obstacle ahead by the sounding of an alarm or vibration of the smartphone 100.
 Suppose that the user U stops looking at the smartphone 100, as shown in FIG. 10C. In this case, it is detected that the user U is not in the viewing state, and imaging by the rear camera 13B therefore ends. Note that, for example, while the smartphone 100 is powered on, imaging in the viewing state detection mode by the front camera 13A continues.
 FIG. 11 is a flowchart showing the flow of processing performed in this modification. In step ST31, imaging in the viewing state detection mode is performed using the front camera 13A. In the viewing state detection mode, power is supplied to, for example, the pixel array section 121, the system controller 131, the sensor I/F 138, and the detection section 139. The process then proceeds to step ST32.
 In step ST32, viewing state detection processing is performed to determine, on the basis of the image captured in the viewing state detection mode, whether the user U of the smartphone 100 is in the viewing state. For example, when the captured image contains eyes, the user is determined to be in the viewing state; when it does not, the user is determined not to be in the viewing state (hereinafter referred to as the non-viewing state as appropriate). By detecting not only the presence of an eye region but also the gaze direction, the determination between the viewing state and the non-viewing state can be made more accurately. For example, the user may be determined to be in the viewing state when the gaze of the eyes in the image is directed toward the front camera 13A, and in the non-viewing state when it is not. Whether the user is in the viewing state may also be determined by applying a learning model trained in advance by machine learning, for example a learning model obtained by learning the shapes of eyes in the viewing state. In this modification the viewing state detection processing is performed by the detection section 139, but another functional block, for example the system controller 131, may perform it instead. The process then proceeds to step ST33.
 In step ST33, it is determined whether the confidence of the viewing state is equal to or greater than a threshold. This determination is made, for example, by the detection section 139. For example, the confidence is high when the viewing state lasts for several seconds or when the eye region in the image is large, and low when the viewing state lasts only briefly or when the eye region in the image is small. The detection section 139 notifies the system controller 131 of the determination result. On receiving the result, the system controller 131 notifies the application processor 101A that the user U is in the viewing state when the confidence of the viewing state is equal to or greater than the threshold, and that the user U is in the non-viewing state when the confidence is less than the threshold. The processing up to this point (the processing enclosed by the dotted line in FIG. 11) is performed by the imaging element 104B. When the user U is in the viewing state (Yes), the process proceeds to step ST35.
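 The following toy scoring function illustrates how the confidence of steps ST32 and ST33 might combine the cues mentioned above (eyes present, gaze direction, eye-region size, duration of the viewing state). The weights, saturation points, and threshold are assumptions, not values from the disclosure.

def viewing_confidence(eyes_detected: bool, gaze_toward_camera: bool,
                       eye_region_frac: float, duration_s: float) -> float:
    """Toy 0..1 confidence for the viewing state decision."""
    if not eyes_detected or not gaze_toward_camera:
        return 0.0
    size_term = min(eye_region_frac / 0.02, 1.0)  # saturate at 2% of the frame
    time_term = min(duration_s / 3.0, 1.0)        # saturate after ~3 seconds
    return 0.5 * size_term + 0.5 * time_term

CONF_THRESHOLD = 0.6  # illustrative threshold for step ST33

def is_viewing(eyes, gaze, frac, dur) -> bool:
    return viewing_confidence(eyes, gaze, frac, dur) >= CONF_THRESHOLD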
 The processing from step ST35 onward is the obstacle detection processing (an application for obstacle detection) performed by the application processor 101A. In step ST35, the application processor 101A turns on the display 12 because the user U is in the viewing state. The process then proceeds to step ST36.
 In step ST36, the application processor 101A performs control to activate the acceleration sensor, which constitutes the sensor 111 in this modification. The process then proceeds to step ST37.
 In step ST37, the application processor 101A detects whether the user U is walking on the basis of the sensing result of the acceleration sensor. If the user U is walking, the process proceeds to step ST38.
 In step ST38, after the rear camera 13B is activated, imaging in the normal mode by the rear camera 13B starts. The images obtained by the rear camera 13B in the normal mode are supplied to the camera signal processing unit 140, which performs processing for detecting whether an obstacle is included in the image, for example an obstacle on the road. Monitoring for obstacles using the rear camera 13B thus begins. Although not shown, when an obstacle is detected by, for example, the camera signal processing unit 140, the application processor 101A is notified of that fact and notifies the user U of the obstacle using sound, display, vibration, a combination of these, or the like.
 On the other hand, if the determination result in step ST33 is the non-viewing state (No), the process proceeds to step ST39. In step ST39, the application processor 101A turns off the display 12 because the user U is in the non-viewing state; if the display 12 is already off, it remains off. The process then proceeds to step ST40.
 In step ST40, since the user U is in the non-viewing state and the obstacle detection processing need not be performed, the application processor 101A performs control to stop the operation of the acceleration sensor; if the acceleration sensor is already stopped, it remains stopped. The process then proceeds to step ST41.
 In step ST41, since the obstacle detection processing need not be performed, imaging by the rear camera 13B ends and the monitoring for obstacles ends. Also, when the determination result in step ST37 is No, that is, when the user is not walking, the obstacle detection processing need not be performed, so the process proceeds to step ST41 and imaging by the rear camera 13B ends.
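 The branching of steps ST31 to ST41 can be condensed as follows. The device objects and their methods are hypothetical; only the control flow mirrors FIG. 11.

def variation1_step(front_camera, rear_camera, accel, display):
    frame = front_camera.capture_low_power()            # ST31, viewing state detection mode
    if front_camera.detect_viewing(frame):              # ST32/ST33
        display.on()                                    # ST35
        accel.start()                                   # ST36
        if accel.user_is_walking():                     # ST37
            rear_camera.start_normal_mode_monitoring()  # ST38
            return
    else:
        display.off()                                   # ST39
        accel.stop()                                    # ST40
    rear_camera.stop()                                  # ST41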
 In the above processing, the display control for the display 12 and the control of the acceleration sensor (for example, steps ST35 and ST36) may be performed in the reverse order or in parallel. The processing of detecting obstacles from images obtained by imaging in the normal mode may also be performed by the application processor 101A instead of the camera signal processing unit 140.
 The content displayed on the display 12 is not limited to a map and may be, for example, an image of the other party to a call. This modification can be applied even when the user U walks while operating the smartphone 100 for an urgent communication.
 According to this modification, the viewing state detection processing is performed on the basis of imaging results obtained with low power consumption. Therefore, even if the viewing state detection processing is performed constantly, or for a long time, after the smartphone 100 is powered on, the power consumption of the smartphone 100 can be suppressed. Because the viewing state detection processing is performed within the imaging element 104B, the amount of data transferred between the imaging element 104B and the application processor 101A can be reduced, which reduces power consumption. Because the application processor 101A executes a predetermined application only when the user U is detected to be in the viewing state, unnecessary execution of the application can be prevented. Since no sensor other than the imaging element 104B needs to be added to perform the viewing state detection processing, the number of parts and the manufacturing cost can be reduced. Moreover, since the application for detecting obstacles is started and executed without any operation by the user U, convenience for the user U can be improved.
[Modification 2]
 FIG. 12 is a diagram showing an example of the appearance of an imaging device (imaging device 100A) according to this modification. The imaging device 100A has, for example, a substantially rectangular parallelepiped shape and has a display 12A on one main surface. The imaging device 100A further has a first camera 13C provided on the same surface as the display 12A and a second camera 13D provided on a side surface orthogonal to the display 12A. The first camera 13C and the second camera 13D each have an imaging element 104B. The imaging device 100A is configured so that it can be attached to a moving body such as a car, a bicycle, or a motorcycle using an appropriate attachment member. The imaging device 100A has, for example, the same functions as the smartphone 100, although there may be functional differences.
 FIGS. 13A to 13C are diagrams for schematically explaining this modification, showing an example in which the imaging device 100A is attached to a bicycle. When the imaging device 100A is powered on, for example, it starts imaging in the viewing state detection mode using the first camera 13C.
 As shown in FIG. 13A, suppose that the user U looks at the display 12A to check the bicycle route while the bicycle is stopped. That the user U is in the viewing state is detected on the basis of the image obtained by the first camera 13C.
 Then, as shown in FIG. 13B, suppose that the user U starts off on the bicycle while checking the route (while looking at the display 12A). Since the user U is in the viewing state and has started moving, the second camera 13D is activated and then performs imaging in the normal mode. The obstacle detection processing described above is performed on the basis of the images obtained by the second camera 13D, and when an obstacle is detected in front of the user U, the user U is notified of that fact. During this time, imaging in the viewing state detection mode using the first camera 13C continues.
 Then, as shown in FIG. 13C, when the user U starts pedaling while looking ahead, it is detected from the images captured in the viewing state detection mode that the user U is in the non-viewing state. In this case, since the user U is considered to be facing forward without looking at the display 12A, the imaging device 100A determines that the obstacle detection processing need not be performed, and imaging by the second camera 13D ends on the basis of this determination. Since the user U may look at the display 12A again to check the bicycle route, imaging in the viewing state detection mode by the first camera 13C and the viewing state detection processing using its results continue.
 FIG. 14 is a flowchart showing the flow of processing according to this modification. The content of steps ST41 to ST43 is the same as that of steps ST31 to ST33 in Modification 1, except that the first camera 13C takes the place of the front camera 13A, so redundant description is omitted.
 The processing from step ST44 onward is the obstacle detection processing (an application for obstacle detection) performed by the application processor 101A. In step ST44, the application processor 101A turns on the display 12 because the user U is in the viewing state. The process then proceeds to step ST45.
 In step ST45, the application processor 101A determines whether the bicycle on which the user U is riding is moving. For example, the application processor 101A makes this determination on the basis of the sensing results of an always-on GPS (an example of the position sensor 110) and a speed sensor used to record the riding log. If the bicycle on which the user U is riding is moving, the process proceeds to step ST46.
 In step ST46, after the second camera 13D is activated, imaging in the normal mode by the second camera 13D starts. The images obtained by the second camera 13D in the normal mode are supplied to the camera signal processing unit 140, which performs processing for detecting whether an obstacle is included in the image, for example an obstacle on the road. Monitoring for obstacles using the second camera 13D thus begins. Although not shown, when an obstacle is detected by, for example, the camera signal processing unit 140, the application processor 101A is notified of that fact and notifies the user U of the obstacle using sound, display, vibration, a combination of these, or the like.
 On the other hand, if the determination result in step ST43 is the non-viewing state (No), the process proceeds to step ST47. In step ST47, the application processor 101A turns off the display 12 because the user U is in the non-viewing state; if the display 12 is already off, it remains off. The process then proceeds to step ST48.
 In step ST48, since the user U has entered the non-viewing state, the application processor 101A determines that the obstacle detection processing need not be performed and, on the basis of this determination, performs control to stop imaging by the second camera 13D. Also, when the determination result in step ST45 is No, that is, when the bicycle is not moving, the obstacle detection processing need not be performed, so the process proceeds to step ST48 and imaging by the second camera 13D ends. This modification provides the same effects as Modification 1 described above.
 In the above description, imaging in the viewing state detection mode is performed by the first camera 13C and imaging for the obstacle detection processing is performed by the second camera 13D, but the roles of the two cameras may be reversed.
[Modification 3]
 FIG. 15 is a diagram showing an example of the appearance of a portable device according to this modification. The portable device according to this modification is a watch-type portable device 100B. The portable device 100B has a display 12C and further has a camera 13F provided above the display 12C. The portable device 100B also has, for example, the same functions and configuration as the smartphone 100; of course, there may be structural or functional differences between the portable device 100B and the smartphone 100.
 FIG. 16 is a flowchart showing the flow of processing performed by the portable device 100B according to this modification. In FIG. 16, the processing enclosed by the upper dotted line is performed by the imaging element 104B, and the processing enclosed by the lower dotted line is performed by the application processor 101A; the same applies to the other figures. In step ST51, when there is no operation input to the portable device 100B for a predetermined period, the portable device 100B is controlled to enter a locked state in which no operation is accepted. The transition to the locked state is performed by, for example, the control unit 101 of the portable device 100B. The process then proceeds to step ST52.
 The contents of steps ST52 to ST54 are the same as the processing of steps ST31 to ST33 in Modification 1, except that the camera 13F is used instead of the front camera 13A, so redundant description is omitted. If the determination in step ST54 is No, the process returns to step ST52.
 If the determination result in step ST54 is Yes, the process proceeds to step ST55. The processing from step ST55 onward is, for example, an application executed by the application processor 101A that performs face authentication using the camera 13F (face authentication processing). In step ST55, under the control of the application processor 101A, the imaging mode of the camera 13F transitions from the viewing state detection mode to the normal mode. Alternatively, when the viewing state is detected in step ST54, the system controller 131 of the imaging element 104B may autonomously switch the imaging mode from the viewing state detection mode to the normal mode. The application processor 101A also turns on the display 12C. The process then proceeds to step ST56.
 In step ST56, a high-quality image is obtained by normal-mode imaging with the camera 13F. Based on this image, for example, the camera signal processing unit 140 performs face authentication processing. A known method can be applied as the face authentication processing; for example, a learning model obtained by machine learning is used to detect whether or not the face included in the image is a registered face. Since a high-quality image is obtained by imaging in the normal mode of the camera 13F, the face authentication can be performed accurately. If the face authentication succeeds in step ST56 (Yes), the process proceeds to step ST57.
 In step ST57, since the authentication has succeeded, the locked state of the portable device 100B is released. The process then proceeds to step ST58 and ends. If the face authentication fails in step ST56 (No), the process returns to step ST52, and the processing from step ST52 onward is repeated.
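 The lock/unlock flow of FIG. 16 can be summarized by the following sketch. It assumes a hypothetical device interface (camera, display, face_matches_registered_user, and so on) and is not the actual implementation.

```python
# Sketch of steps ST51-ST58: low-power viewing-state detection gates face authentication.
def unlock_loop(device):
    device.lock()                                        # step ST51
    while device.is_locked():
        if not device.camera.detect_viewing_state():     # steps ST52-ST54
            continue                                     # remain in the low-power loop
        device.camera.set_mode("normal")                 # step ST55
        device.display.turn_on()
        image = device.camera.capture()                  # step ST56: high-quality image
        if device.face_matches_registered_user(image):
            device.unlock()                              # step ST57, then ends at ST58
        else:
            device.camera.set_mode("viewing_state_detection")  # back to ST52
```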
 For example, if face authentication processing were performed whenever subject motion is detected based on images obtained by the camera 13F, the face authentication processing would be performed even for the motion of a person other than the user U, and power might be wasted. Similarly, if a person is detected based on images obtained by the camera 13F and the face authentication processing is triggered by the detection of a person, the face authentication processing would be performed even when the user U is not looking at the portable device 100B, again wasting power. In this modification, the face authentication processing is triggered by the state in which the user U is paying attention to the portable device 100B, that is, the viewing state, so unnecessary face authentication processing can be suppressed. This suppresses the power consumption of the portable device 100B. Furthermore, since the application for performing face authentication is started and executed without the user U performing any operation, the convenience of the user U can be improved.
 Control is also conceivable in which a gyro sensor detects a twist of the wrist or the posture of the portable device 100B and the display is turned on based on the detection result. However, when the user U is lying down, for example, the posture of the portable device 100B differs from the usual posture, so the display might not turn on. Since the viewing state detection processing according to this modification is not affected by the posture of the portable device 100B, such an inconvenience can be prevented.
 Although a watch-type portable device has been described as an example of the portable device 100B in this modification, this modification can also be applied to electronic devices such as smartphones.
[Modification 4]
 FIGS. 17A to 17D are diagrams for explaining this modification. As shown in FIGS. 17A to 17D, the electronic device according to this modification is a television device 100C. The television device 100C has, for example, a display 12D and a camera 13G provided at a predetermined position of the frame surrounding the display 12D (for example, near the upper center). The television device 100C also has an internal configuration similar to that of a known television device. The camera 13G has the imaging element 104B and can execute the viewing state detection processing. For example, when control is performed to turn on the main power supply of the television device 100C (for example, when the television device 100C is connected to a commercial power supply), the viewing state detection processing using the camera 13G is started.
 For example, as shown in FIGS. 17A and 17D, when the user U is viewing the television device 100C, the viewing state of the user U is detected by viewing state detection processing similar to that of Modification 1. When the viewing state is detected, for example, the television device 100C is turned on. After that, the imaging mode of the camera 13G transitions from the viewing state detection mode to the normal mode. This transition may be controlled autonomously by the imaging element 104B, or may be performed under the control of the control unit of the television device 100C. The application processor 101A of the television device 100C then performs processing for authenticating the face of the user U based on the image obtained by imaging in the normal mode. Since a high-quality image obtained by normal-mode imaging is used, the face authentication processing can be performed accurately.
 The face authentication processing includes, for example, processing for matching the face of the user U against the faces of registered users. If the user U is a registered user, the settings of the television device 100C are controlled so as to correspond to that registered user, and television broadcasts or content delivered over a network are reproduced according to those settings. If the user U is not a registered user, television broadcasts or network content are reproduced with default settings. Examples of the settings of the television device 100C include volume, brightness, luminance, and color tone. After the setting processing of the television device 100C is completed, for example, the imaging mode of the camera 13G switches from the normal mode back to the viewing state detection mode, and imaging is performed in the viewing state detection mode. If the viewing state detection processing using the images obtained by this imaging finds that the user U is in the non-viewing state (for example, when the non-viewing state continues for a certain period), the power of the television device 100C is controlled to be turned off.
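 The per-viewer setting control described above can be sketched as follows; the interface names (face_authenticator, apply_settings) and the setting values are hypothetical, not taken from the disclosure.

```python
# Sketch: apply a registered viewer's settings after face authentication.
DEFAULT_SETTINGS = {"volume": 10, "brightness": 50, "color_tone": "standard"}

def apply_viewer_settings(tv, image, registered_users):
    face_id = tv.face_authenticator.match(image)     # None if the face is unregistered
    settings = registered_users.get(face_id, DEFAULT_SETTINGS)
    tv.apply_settings(settings)                      # volume, brightness, color tone, ...
    tv.camera.set_mode("viewing_state_detection")    # return to the low-power mode
```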
 A method is also conceivable in which the presence or absence of a person is detected by a human presence sensor and the power of the television device 100C is turned on or off according to the detection result. However, with the method using a human presence sensor, as shown in FIGS. 17B and 17C, the power of the television device 100C would be turned on even when the user U is not viewing the television device 100C, and power may be wasted. According to this modification, the power of the television device 100C is not turned on when the user U is in the non-viewing state of not viewing the television device 100C, so the above-described inconvenience can be prevented.
[Modification 5]
 In recent years, large displays have been used to transmit information outdoors and in public spaces such as shops, stations, and parks (hereinafter collectively referred to as public spaces and the like, as appropriate). Such information transmission is called digital signage. This modification is an example in which the present disclosure is applied to digital signage.
 As shown in FIG. 18, a relatively large display device 100D is installed in a public space or the like. The display device 100D has a display 12H and a camera 13H provided at approximately the center of the upper frame. The display device 100D has, for example, functions similar to those of the smartphone 100.
 For example, when the display device 100D is connected to a power supply, the camera 13H performs imaging in the viewing state detection mode. Based on the images obtained by the imaging, the viewing state detection processing is performed in the display device 100D, and content reproduction control is performed according to the detection result.
 As shown in FIG. 19, when no viewing state is detected as a result of the viewing state detection processing, that is, when no user is viewing the display 12H, the screen of the display 12H is controlled to be off. Instead of turning the screen of the display 12H off, standby content may be reproduced on the display 12H. When the viewing state is detected as a result of the viewing state detection processing, that is, when there is a user viewing the display 12H, the main content is reproduced on the display 12H. The main content is, for example, content set in advance. The viewing state detection processing continues while the main content is being reproduced on the display 12H. If the viewing state is no longer detected during reproduction of the main content, the user U who had been viewing the main content has stopped looking at the display 12H, so control is performed to turn off the display 12H; instead of turning the display 12H off, standby content may be reproduced. The display 12H is likewise turned off, or standby content is reproduced, when the reproduction of the main content ends.
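 As a minimal sketch with hypothetical names, the content reproduction control of FIG. 19 amounts to the following loop body, executed repeatedly while the display device is powered.

```python
# Sketch: show the main content only while someone is viewing the display.
def signage_step(display_device):
    if display_device.camera.detect_viewing_state():
        display_device.play_main_content()    # preset main content
    else:
        display_device.screen_off()           # or: display_device.play_standby_content()
```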
 In the example described above, the operation mode of the camera 13H does not change; however, in this modification, the operation mode of the camera 13H may also transition from the viewing state detection mode to the normal mode.
 For example, as shown in the flowchart of FIG. 20, the viewing state detection processing is performed by steps ST61 to ST63. The content of the viewing state detection processing is the same as the processing of steps ST31 to ST33 in Modification 1, except that the camera 13H is used instead of the front camera 13A, so redundant description is omitted. If the determination in step ST63 is No, the process returns to step ST61.
 If the determination in step ST63 is Yes, that is, if the viewing state is detected, the mode of the camera 13H transitions from the viewing state detection mode to the normal mode, and the process proceeds to step ST64. In step ST64, the application processor 101A runs an application that recognizes the attributes of the user U viewing (browsing) the display 12H. For example, the application processor 101A performs attribute recognition processing for recognizing the attributes of the user U based on an image obtained by normal-mode imaging with the camera 13H. The method of the attribute recognition processing is not limited to a specific one; one example is to apply a learning model obtained by a DNN. After the attributes of the user U are recognized, the process proceeds to step ST65.
 In step ST65, attribute recording processing for recording the attributes of the user U is performed. For example, the attributes of the user U are recorded in an appropriate memory in the display device 100D. The attributes of the user U include attributes of the user U themselves, such as gender and age, as well as the time spent viewing the main content. The attributes may also include attributes not limited to a specific user U (for example, the number of times the main content has been viewed). The attributes obtained by the attribute recognition processing may instead be transmitted to, for example, a server on a network without being recorded. Furthermore, the application executed by the application processor 101A may include not only the attribute recognition processing but also control corresponding to the recognized attributes; more specifically, it may include content reproduction control according to the recognized attributes and setting processing related to content reproduction control. For example, when the age of the user U is recognized as an attribute, main content corresponding to that age may be reproduced: when the recognized user U is young, an animation is displayed on the display 12H, and when the recognized user U is elderly, control to increase the volume, which is one of the setting processes related to content reproduction control, is performed. In addition, the luminance, color tone, brightness, character size, and so on of the display 12H may be adjusted according to the recognized attributes of the user U.
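 For illustration only, the attribute recording and attribute-dependent control of steps ST64 and ST65 can be sketched as follows; the attribute keys and age thresholds are arbitrary examples, not values from the disclosure.

```python
# Sketch of steps ST64-ST65: record attributes, then adjust reproduction to them.
def handle_recognized_attributes(display_device, attributes):
    display_device.memory.record(attributes)          # step ST65 (or send to a server)
    age = attributes.get("age")
    if age is not None:
        if age < 12:
            display_device.play_content("animation")  # young viewer
        elif age > 65:
            display_device.increase_volume()          # one of the reproduction settings
    # luminance, color tone, brightness, character size, etc. could be adjusted similarly
```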
 According to this modification, by performing the attribute recognition processing, it is possible to acquire the age group and degree of interest of users who view the content (for example, an advertisement) displayed on the display 12H, which makes it possible to analyze the effects of the content. Although the attribute recognition processing consumes a certain amount of power or more, in this modification it is performed only when the viewing state is detected, so power consumption in the display device 100D can be suppressed.
 This modification can be applied not only to digital signage but also to paper media, for example, notices posted on a bulletin board. As shown in FIG. 21A, for example, notices are posted on a bulletin board 71, and an imaging device 100I is attached near the center of the upper frame of the bulletin board 71. As shown in FIG. 21B, the imaging device 100I has, for example, a box shape and has a camera 13I on one predetermined surface. The imaging device 100I is a device that can be attached to an appropriate location on the bulletin board 71 using a thumbtack or the like, and has, for example, functions similar to those of the smartphone 100.
 The viewing state detection processing is performed based on the images obtained by imaging with the camera 13I in the viewing state detection mode. When the viewing state is detected as a result of the viewing state detection processing, the imaging mode of the camera 13I transitions from the viewing state detection mode to the normal mode. Based on the image obtained by imaging in the normal mode, for example, the attribute recognition processing described above is performed. As a result of the attribute recognition processing, for example, the attributes of the users who viewed the notices and the number of times the notices were viewed are obtained. The obtained information can be used to analyze the effectiveness of the notices and the degree of interest in them.
 The notices in the above description may also be exhibits. For example, as shown in FIG. 22, a robot 72, which is an example of an exhibit, is placed on an exhibition device 100J. The exhibition device 100J has a columnar portion 81 and a sheet portion 82; the sheet portion 82 can be wound around the columnar portion 81, for example. A camera 13J is provided at approximately the center of the columnar portion 81.
 The viewing state detection processing is performed based on the images obtained by imaging with the camera 13J in the viewing state detection mode. When the viewing state is detected as a result of the viewing state detection processing, the imaging mode of the camera 13J transitions from the viewing state detection mode to the normal mode. Based on the image obtained by imaging in the normal mode, for example, the attribute recognition processing described above is performed. As a result of the attribute recognition processing, for example, the attributes of the users who viewed the exhibit and indicators of interest such as the number of times it was viewed are obtained. The obtained information can be used to analyze, for example, which kinds of users are interested in which kinds of exhibits.
[Other Modifications]
 The application executed by the application processor 101A may be determined so as to correspond to the part of the human body and the predetermined object, and based on at least one of information based on the image data obtained by the imaging element 104B and information obtained from a sensor different from the imaging element 104B (for example, the position sensor 110 or the sensor 111). An example of information based on the image data obtained by the imaging element 104B is motion vector information. The information obtained from the position sensor 110 and the sensor 111 includes, for example, at least one of position information, information on the tilt of the smartphone 100, information on illuminance, and time information.
 The combination of a part of the human body and a predetermined object for starting one application may be similar to that for starting another application. For example, as described in the embodiment, depending on the number of fingers, an alarm may be set in some cases and the operation mode of the smartphone 100 may be set in others. In such cases, by further referring to the information obtained from the sensor 111, the application that matches the user's intention can be started. For example, when a finger gesture is made while the illuminance is low, the time information indicates nighttime, the position information indicates home, or the smartphone 100 is lying at rest, the user is likely to be setting an alarm for waking up, so the application to be started is determined to be the alarm. Conversely, when a finger gesture is made while the illuminance is high, the time information indicates daytime, the position information indicates the workplace, or the smartphone 100 is being held, the user is likely to be using the smartphone 100 for work, so the application to be started is determined to be the mode setting control of the smartphone 100. Such a determination is made, for example, by the application processor 101A connected to the sensor 111. Furthermore, when a medicine tablet is detected and the motion vectors further indicate that the position of the medicine has moved back and forth or up and down, in other words, when the action of taking the medicine is detected, an application that registers the medication history may be started. The place where the medicine was taken may additionally be registered based on the position information.
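 The disambiguation described above can be sketched as a simple scoring rule; the thresholds and the two-out-of-four criterion are arbitrary illustrations, not part of the disclosure.

```python
# Sketch: use sensor context to choose between the alarm and mode-setting applications.
def decide_application(illuminance_lux, hour, location, device_is_held):
    alarm_score = 0
    alarm_score += illuminance_lux < 10        # dark surroundings
    alarm_score += hour >= 22 or hour < 6      # nighttime
    alarm_score += location == "home"
    alarm_score += not device_is_held          # smartphone lying at rest
    return "alarm" if alarm_score >= 2 else "mode_setting"
```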
 If the application to be started cannot be uniquely determined and there are multiple candidates, a screen prompting the user to make a selection may be displayed on the display 12. Also, when an application is started automatically, it may differ from the user's intention; in such a case, the started application may be made cancelable. Cancellation may be performed by voice input or by a cancellation gesture, and the cancellation gesture may be detected by imaging in the detection mode. Furthermore, when an application is started automatically, a notification may be issued to confirm whether the application matches the user's intention.
 A notification requesting an additional action may be issued when the application is executed. The request for the additional action is, for example, a message such as "If you agree to register, please touch the confirmation button." The request for the additional action may be displayed on the display 12, or may be announced by sound or vibration. The additional action may be a movement in the time direction, an action of drawing a predetermined pattern in the air, or an action for performing authentication such as fingerprint authentication, voiceprint authentication, or password entry. From a security standpoint, the additional action is preferably one known only to the user. Examples of movement in the time direction include gestures that change over time (for example, rock, scissors, and paper in rock-paper-scissors) and an operation of moving an object corresponding to the predetermined object (for example, a medicine) up and down or back and forth.
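 A sketch of this confirmation flow follows; the notification text matches the example above, while the interface names and the timeout are hypothetical.

```python
# Sketch: request an additional confirming action before executing the application.
def execute_with_confirmation(device, application):
    device.notify("If you agree to register, please touch the confirmation button.")
    if device.wait_for_confirmation(timeout_seconds=10):
        application.run()
    else:
        device.notify("The operation was cancelled.")
```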
 In the embodiment described above, part of the processing performed by the detection unit 139 and the system controller 131 may be performed outside the imaging element 104B. For example, part of the processing performed by the detection unit 139 and the system controller 131 may be performed by the control unit 101 or the application processor 101A. In this way, which functional block is responsible for which processing can be changed as appropriate.
 Details of the part of the human body imaged in the recognition mode may be recognized by the system controller 131. Examples of such details include facial expressions and wrinkles on the hands or face. The application to be started may be determined in consideration of these details.
 In the embodiment described above, an example was described in which the imaging element 104B performs imaging in one of the moving object detection mode, the detection mode, the recognition mode, and the normal mode, but the present disclosure is not limited to this. For example, the moving object detection mode may be omitted, and moving objects may be detected based on the image data obtained in the detection mode. Also, the application to be started may be determined based on the image data obtained in the detection mode.
 The processing according to the present disclosure can be configured as a control method or as a program that causes a computer to execute the control method. The program can be distributed via a server or the like and installed in an electronic device such as a smartphone. The contents described in the embodiment and the modifications can be combined as appropriate without departing from the gist of the present disclosure.
 The effects described in this specification are merely examples and are not limiting; other effects may also be obtained.
 Note that the present disclosure can also be configured as follows.
(1)
 An electronic device including:
 an imaging element that performs imaging in a first mode or in a second mode in which power consumption is smaller than in the first mode; and
 an application processor that, when a part of a human body and a predetermined object are detected by the imaging of the imaging element in the second mode, executes an application corresponding to the part of the human body and the predetermined object.
(2)
 The electronic device according to (1), in which the predetermined object includes a part of a human body different from the part of the human body.
(3)
 The electronic device according to (1) or (2), in which the application processor recognizes the part of the human body as an operation instruction for the predetermined object and executes the application according to the recognition result.
(4)
 The electronic device according to any one of (1) to (3), in which the predetermined object is imaged by the imaging element in the first mode.
(5)
 The electronic device according to any one of (1) to (4), in which the application is an application that registers information related to the predetermined object.
(6)
 The electronic device according to (5), in which the information related to the predetermined object is information on a usage history of the predetermined object.
(7)
 The electronic device according to (5) or (6), in which the predetermined object is an object held by the part of the human body.
(8)
 The electronic device according to (1), in which the predetermined object is a part of a human body different from the part of the human body, and the application processor executes an application corresponding to the combination of the parts of the human body.
(9)
 The electronic device according to (8), in which the part of the human body and the different part of the human body are a part of a right hand and a part of a left hand, respectively.
(10)
 The electronic device according to any one of (1) to (9), in which an application is executed that corresponds to the part of the human body and the predetermined object and that is determined based on at least one of information based on image data obtained by the imaging element and information obtained from a sensor different from the imaging element.
(11)
 The electronic device according to (10), in which the information obtained from the sensor includes at least one of information on a tilt of the electronic device, information on illuminance, position information, and time information.
(12)
 The electronic device according to any one of (1) to (11), in which a notification requesting an additional action is issued when the application is executed.
(13)
 The electronic device according to (12), further including a display, in which the request for the additional action is displayed on the display.
(14)
 The electronic device according to (12) or (13), in which the additional action is a movement in a time direction.
(15)
 The electronic device according to any one of (12) to (14), in which the additional action is an action for performing authentication.
(16)
 The electronic device according to any one of (1) to (15), in which the predetermined object is identified by multi-wavelength sensing.
(17)
 The electronic device according to any one of (1) to (16), in which the power consumption in the second mode is made smaller than the power consumption in the first mode by making an imaging parameter for imaging in the second mode smaller than an imaging parameter for imaging in the first mode.
(18)
 The electronic device according to (17), in which the imaging parameter includes at least one of resolution, color gradation, imaging area, and frame rate.
(19)
 A control method in which:
 an imaging element performs imaging in a first mode or in a second mode in which power consumption is smaller than in the first mode; and
 an application processor, when a part of a human body and a predetermined object are detected by the imaging of the imaging element in the second mode, executes an application corresponding to the part of the human body and the predetermined object.
(20)
 A program that causes a computer to execute a control method in which:
 an imaging element performs imaging in a first mode or in a second mode in which power consumption is smaller than in the first mode; and
 an application processor, when a part of a human body and a predetermined object are detected by the imaging of the imaging element in the second mode, executes an application corresponding to the part of the human body and the predetermined object.
(21)
 An electronic device including:
 an imaging element that performs imaging in a first mode or in a second mode in which power consumption is smaller than in the first mode, and detects whether or not a viewing state exists according to an imaging result in the second mode; and
 an application processor that executes a predetermined application when the result of the detection indicates the viewing state.
(22)
 The electronic device according to (21), in which control is performed to change the operation mode of the imaging element to the first mode when the result of the detection indicates the viewing state.
(23)
 The electronic device according to (22), in which the predetermined application includes processing for recognizing an attribute of the viewer in the viewing state based on an imaging result in the first mode.
(24)
 The electronic device according to (23), in which the predetermined application includes content reproduction control processing corresponding to the attribute and setting processing related to content reproduction control.
(25)
 The electronic device according to (21) or (22), in which, when the result of the detection indicates the viewing state, the application processor executes the predetermined application with reference to a sensing result of a sensor different from the imaging element.
(26)
 The electronic device according to (25), further including another imaging element different from the imaging element, in which the predetermined application includes processing for detecting an obstacle based on an image obtained from the other imaging element.
(27)
 A control method in which:
 an imaging element performs imaging in a first mode or in a second mode in which power consumption is smaller than in the first mode, and detects whether or not a viewing state exists according to an imaging result in the second mode; and
 an application processor executes a predetermined application when the result of the detection indicates the viewing state.
(28)
 A program that causes a computer to execute a control method in which:
 an imaging element performs imaging in a first mode or in a second mode in which power consumption is smaller than in the first mode, and detects whether or not a viewing state exists according to an imaging result in the second mode; and
 an application processor executes a predetermined application when the result of the detection indicates the viewing state.
12 ... Display
100 ... Smartphone
101 ... Control unit
101A ... Application processor
104B ... Imaging element
139 ... Detection unit

Claims (28)

  1.  An electronic device comprising:
      an imaging element that performs imaging in a first mode or in a second mode in which power consumption is smaller than in the first mode; and
      an application processor that, when a part of a human body and a predetermined object are detected by the imaging of the imaging element in the second mode, executes an application corresponding to the part of the human body and the predetermined object.
  2.  The electronic device according to claim 1, wherein the predetermined object includes a part of a human body different from the part of the human body.
  3.  The electronic device according to claim 1, wherein the application processor recognizes the part of the human body as an operation instruction for the predetermined object and executes the application according to the recognition result.
  4.  The electronic device according to claim 1, wherein the predetermined object is imaged by the imaging element in the first mode.
  5.  The electronic device according to claim 1, wherein the application is an application that registers information related to the predetermined object.
  6.  The electronic device according to claim 5, wherein the information related to the predetermined object is information on a usage history of the predetermined object.
  7.  The electronic device according to claim 5, wherein the predetermined object is an object held by the part of the human body.
  8.  The electronic device according to claim 1, wherein the predetermined object is a part of a human body different from the part of the human body, and the application processor executes an application corresponding to the combination of the parts of the human body.
  9.  The electronic device according to claim 8, wherein the part of the human body and the different part of the human body are a part of a right hand and a part of a left hand, respectively.
  10.  The electronic device according to claim 1, wherein an application is executed that corresponds to the part of the human body and the predetermined object and that is determined based on at least one of information based on image data obtained by the imaging element and information obtained from a sensor different from the imaging element.
  11.  The electronic device according to claim 10, wherein the information obtained from the sensor includes at least one of information on a tilt of the electronic device, information on illuminance, position information, and time information.
  12.  The electronic device according to claim 1, wherein a notification requesting an additional action is issued when the application is executed.
  13.  The electronic device according to claim 12, further comprising a display, wherein the request for the additional action is displayed on the display.
  14.  The electronic device according to claim 12, wherein the additional action is a movement in a time direction.
  15.  The electronic device according to claim 12, wherein the additional action is an action for performing authentication.
  16.  The electronic device according to claim 1, wherein the predetermined object is identified by multi-wavelength sensing.
  17.  The electronic device according to claim 1, wherein the power consumption in the second mode is made smaller than the power consumption in the first mode by making an imaging parameter for imaging in the second mode smaller than an imaging parameter for imaging in the first mode.
  18.  The electronic device according to claim 17, wherein the imaging parameter includes at least one of resolution, color gradation, imaging area, and frame rate.
  19.  A control method in which:
      an imaging element performs imaging in a first mode or in a second mode in which power consumption is smaller than in the first mode; and
      an application processor, when a part of a human body and a predetermined object are detected by the imaging of the imaging element in the second mode, executes an application corresponding to the part of the human body and the predetermined object.
  20.  A program that causes a computer to execute a control method in which:
      an imaging element performs imaging in a first mode or in a second mode in which power consumption is smaller than in the first mode; and
      an application processor, when a part of a human body and a predetermined object are detected by the imaging of the imaging element in the second mode, executes an application corresponding to the part of the human body and the predetermined object.
  21.  An electronic device comprising:
      an imaging element that performs imaging in a first mode or in a second mode in which power consumption is smaller than in the first mode, and detects whether or not a viewing state exists according to an imaging result in the second mode; and
      an application processor that executes a predetermined application when the result of the detection indicates the viewing state.
  22.  The electronic device according to claim 21, wherein control is performed to change the operation mode of the imaging element to the first mode when the result of the detection indicates the viewing state.
  23.  The electronic device according to claim 22, wherein the predetermined application includes processing for recognizing an attribute of the viewer in the viewing state based on an imaging result in the first mode.
  24.  The electronic device according to claim 23, wherein the predetermined application includes content reproduction control processing corresponding to the attribute and setting processing related to content reproduction control.
  25.  The electronic device according to claim 21, wherein, when the result of the detection indicates the viewing state, the application processor executes the predetermined application with reference to a sensing result of a sensor different from the imaging element.
  26.  The electronic device according to claim 25, further comprising another imaging element different from the imaging element, wherein the predetermined application includes processing for detecting an obstacle based on an image obtained from the other imaging element.
  27.  A control method in which:
      an imaging element performs imaging in a first mode or in a second mode in which power consumption is smaller than in the first mode, and detects whether or not a viewing state exists according to an imaging result in the second mode; and
      an application processor executes a predetermined application when the result of the detection indicates the viewing state.
  28.  A program that causes a computer to execute a control method in which:
      an imaging element performs imaging in a first mode or in a second mode in which power consumption is smaller than in the first mode, and detects whether or not a viewing state exists according to an imaging result in the second mode; and
      an application processor executes a predetermined application when the result of the detection indicates the viewing state.
PCT/JP2022/044010 2021-12-28 2022-11-29 Electronic apparatus, control method, and program WO2023127377A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2021-214581 2021-12-28
JP2021214581 2021-12-28
JP2022-053405 2022-03-29
JP2022053405 2022-03-29

Publications (1)

Publication Number Publication Date
WO2023127377A1 (en)

Family

ID=86998897

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/044010 WO2023127377A1 (en) 2021-12-28 2022-11-29 Electronic apparatus, control method, and program

Country Status (1)

Country Link
WO (1) WO2023127377A1 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150049017A1 (en) * 2012-08-16 2015-02-19 Amazon Technologies, Inc. Gesture recognition for device input
EP3789862A1 (en) * 2015-02-26 2021-03-10 Samsung Electronics Co., Ltd. Touch processing method and electronic device for supporting the same
JP2018026378A (en) * 2016-08-08 2018-02-15 ソニーセミコンダクタソリューションズ株式会社 Solid state imaging device and manufacturing method, and electronic apparatus
JP2021078965A (en) * 2019-11-21 2021-05-27 マクセルホールディングス株式会社 Medicine dosing management device and medicine dosing management system

Similar Documents

Publication Publication Date Title
CN109917956B (en) Method for controlling screen display and electronic equipment
KR102350781B1 (en) Mobile terminal and method for controlling the same
CN110971930B (en) Live virtual image broadcasting method, device, terminal and storage medium
KR102276023B1 (en) Mobile terminal and movemetn based low power implementing method thereof
CN105894733B (en) Driver&#39;s monitoring system
CN105723310B (en) Method and apparatus for user interactive data storage
CN109840061A (en) The method and electronic equipment that control screen is shown
WO2021013145A1 (en) Quick application starting method and related device
CN110058777A (en) The method and electronic equipment of shortcut function starting
CN107300967B (en) Intelligent navigation method, device, storage medium and terminal
CN111475077A (en) Display control method and electronic equipment
CN110827820B (en) Voice awakening method, device, equipment, computer storage medium and vehicle
CN110341627B (en) Method and device for controlling behavior in vehicle
KR101879334B1 (en) Apparatus for indentifying a proximity object and method for controlling the same
CN113613028B (en) Live broadcast data processing method, device, terminal, server and storage medium
CN111339938A (en) Information interaction method, device, equipment and storage medium
CN109684107B (en) Information reminding method and device
CN112445276A (en) Folding screen display application method and electronic equipment
CN114594923A (en) Control method, device and equipment of vehicle-mounted terminal and storage medium
US10845921B2 (en) Methods and systems for augmenting images in an electronic device
CN113206913A (en) Holding posture detection method and electronic equipment
US11605242B2 (en) Methods and devices for identifying multiple persons within an environment of an electronic device
CN110233914A (en) A kind of terminal device and its control method
WO2023127377A1 (en) Electronic apparatus, control method, and program
CN111061369B (en) Interaction method, device, equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22915608

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023570743

Country of ref document: JP

Kind code of ref document: A