WO2020085525A1 - Mobile terminal and control method therefor - Google Patents

Mobile terminal and control method therefor

Info

Publication number
WO2020085525A1
Authority
WO
WIPO (PCT)
Prior art keywords
mode
operation unit
light source
raw data
mobile terminal
Prior art date
Application number
PCT/KR2018/012562
Other languages
English (en)
Korean (ko)
Inventor
송민우
안예한
진동철
김민호
전상국
Original Assignee
LG Electronics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc.
Priority to PCT/KR2018/012562 priority Critical patent/WO2020085525A1/fr
Priority to KR1020190092154A priority patent/KR102218919B1/ko
Priority to KR1020190092155A priority patent/KR20200045947A/ko
Priority to US16/661,215 priority patent/US11500103B2/en
Priority to US16/661,199 priority patent/US11620044B2/en
Publication of WO2020085525A1 publication Critical patent/WO2020085525A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/254 Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules

Definitions

  • the present invention relates to a control method for a mobile terminal. More specifically, it is applicable to the technical field of driving at least one camera (for example, a depth camera) included in the mobile terminal with low power.
  • Terminals can be divided into mobile terminals (mobile / portable terminals) and stationary terminals according to their mobility. Again, the mobile terminal may be divided into a handheld terminal and a vehicle mounted terminal according to whether the user can directly carry it.
  • the functions of mobile terminals are diversifying. For example, there are functions for data and voice communication, photo and video shooting through a camera, voice recording, music file playback through a speaker system, and image or video output on the display.
  • an electronic game play function is added or a multimedia player function is performed.
  • recent mobile terminals can receive multicast signals that provide visual content such as broadcast and video or television programs.
  • As such functions diversify, such a terminal is implemented in the form of a multimedia player having multiple functions, such as taking a picture or a video, playing music or a video file, playing games, and receiving broadcasts.
  • the complex functions of the mobile terminal can be performed without the user directly touching the mobile terminal.
  • the user may interact without directly touching the mobile terminal through a voice or vision-based user interface provided in the mobile terminal.
  • A user interface based on touch input cannot be used when the user is away from the mobile terminal. Also, in the process of touch input, a mobile terminal standing on a surface may be shifted out of position or knocked over. In contrast, a voice or vision-based user interface may be advantageous over a touch-based user interface in that the user can interact while away from the mobile terminal and applies no external force to it.
  • However, the voice-based user interface may be limited by space and time. For example, in a noisy space, it may be difficult for the mobile terminal to discern the user's voice. In addition, it may be inconvenient for the user to interact with the mobile terminal by voice late at night or when another person is present.
  • the vision-based user interface may provide convenience for the user to interact with the mobile terminal in the limited situation as described above. Accordingly, recently, a technology in which a mobile terminal interacts with a user through a vision-based user interface is actively being researched.
  • the mobile terminal can be equipped with a depth camera.
  • However, in order to recognize the user's movement using the depth camera, the depth camera must always be kept in the ON state, which causes unnecessary power consumption in the mobile terminal.
  • the present invention aims to solve the above-mentioned problems and other problems through the specification of the present invention.
  • the present invention intends to propose a solution in which a mobile terminal provides a user interface (UI) based on vision instead of touch input through a depth camera.
  • Specifically, the present invention is to provide a solution that prevents excessive power consumption by varying the operating mode depending on the conditions while the depth camera included in the mobile terminal remains always ON.
  • the present invention is to provide a solution for determining the presence or absence of a user with low power and obtaining a depth image through a Time Of Flight (TOF) camera included in a mobile terminal.
  • A mobile terminal according to an aspect of the present invention for achieving the above object includes a TOF (Time Of Flight) camera operating in a first or second mode, the TOF camera including a light source that irradiates light toward an object, an image sensor that receives the light reflected from the object and obtains raw data, an operation unit that processes the raw data, and a controller connected to the light source, the image sensor, and the operation unit. The controller controls the light source to irradiate light of a specific signal in the first mode, switches to the second mode when a change in the raw data is sensed through the operation unit, controls the light source to vary the phase of the specific signal in the second mode, and controls the operation unit to generate depth data of the object from the raw data.
  • the image sensor includes first and second photogates to which signals having different phases are applied and receive reflected light corresponding thereto
  • The controller controls the image sensor to acquire the raw data through the first photogate in the first mode, and controls the image sensor to acquire the raw data through the first and second photogates in the second mode.
  • The operation unit includes a sub-operation unit that determines the intensity change of the raw data obtained in the first mode, and a main operation unit that generates the depth data through the intensity ratio of the raw data obtained in the second mode.
  • When the controller detects an intensity change of the raw data through the sub-operation unit, the controller wakes up the main operation unit.
  • In the second mode, the controller periodically varies the specific signal so that it has phase differences of 0 degrees, 90 degrees, 180 degrees, and 270 degrees.
  • When the depth data does not change over a preset time in the second mode, the controller switches to the first mode.
  • A mobile terminal according to another aspect of the present invention for achieving the above object includes a TOF (Time Of Flight) camera that operates in a first or second mode and includes a light source for irradiating light, an image sensor for receiving the reflected light and obtaining raw data, and an operation unit for generating depth data from the raw data; a memory storing commands corresponding to the depth data; a display providing graphic feedback in response to the commands stored in the memory; and a controller connected to the TOF camera, the memory, and the display. The controller controls the light source to irradiate light of a specific signal in the first mode and, when a change in the raw data is sensed by the operation unit, switches to the second mode.
  • The image sensor includes first and second photogates to which signals having different phases are applied during a preset frame and which receive the reflected light correspondingly; the raw data is obtained through the first photogate in the first mode, and through the first and second photogates in the second mode.
  • The operation unit includes a sub-operation unit that determines the intensity change of the raw data obtained in the first mode, and a main operation unit that calculates the intensity ratio of the raw data obtained in the second mode and thereby generates the depth data.
  • When the controller detects an intensity change of the raw data through the sub-operation unit, the controller wakes up the main operation unit.
  • A method of controlling a mobile terminal including a time of flight (TOF) camera for achieving the above object includes operating the TOF camera in a first mode to check for the existence of a moving object, and, when the existence of the moving object is confirmed in the first mode, operating the TOF camera in a second mode to obtain depth data of the object. In the first mode, the TOF camera irradiates light of a specific signal toward an object, receives the light reflected by the object to obtain raw data, and determines the existence of the moving object by detecting a change in the raw data. In the second mode, the TOF camera irradiates light while varying the phase of the specific signal, receives the light reflected by the object to obtain raw data, and obtains the depth data of the object from the raw data.
  • The control method further includes switching from the second mode to the first mode when the depth data does not change over a preset time in the second mode.
  • a user may interact with a mobile terminal through a user's movement instead of a touch input.
  • According to an embodiment of the present invention, the TOF camera included in the mobile terminal may be operated in different modes depending on the situation, so that excessive power consumption can be prevented.
  • the presence or absence of a user may be checked with low power through a TOF camera included in a mobile terminal, and a depth image may be acquired to prevent excessive power consumption.
  • FIG. 1A is a block diagram illustrating a mobile terminal related to the present invention.
  • FIGS. 1B and 1C are conceptual views of an example of a mobile terminal related to the present invention viewed from different directions.
  • FIGS. 2 to 6 are views for explaining the driving method of the TOF camera.
  • FIG. 7 is a view for explaining the problem of the existing TOF camera 200 and the effect of the TOF camera according to an embodiment of the present invention which improves the problem.
  • FIG. 8 is a schematic view for explaining the configuration of a TOF camera according to an embodiment of the present invention.
  • FIG. 9 is a flowchart illustrating a control method for controlling a TOF camera according to an embodiment of the present invention.
  • FIG. 10 is a flowchart illustrating a method of determining the presence or absence of an object by comparing frames in FIG. 9.
  • FIG. 11 is a view for explaining FIG. 9.
  • FIG. 12 is a schematic diagram for explaining the configuration of a TOF camera according to another embodiment of the present invention.
  • FIG. 13 is a view for explaining a method and a problem of recognizing a user's finger through a depth image acquired by a conventional depth camera.
  • FIG. 14 is a diagram for schematically explaining a method in which a mobile terminal according to an embodiment of the present invention recognizes a point targeted by a user's hand through a depth camera.
  • FIG. 15 is a flow chart for explaining FIG. 14.
  • FIG. 16 is a view for explaining an interaction region in FIG. 15.
  • FIG. 17 is a diagram for describing a depth image acquired in the interaction area of FIG. 16.
  • FIG. 18 is a flowchart illustrating a method of implementing a segment tree corresponding to a user's hand in FIG. 15.
  • FIGS. 19 to 22 are views for explaining FIG. 18.
  • FIG. 23 is a flowchart illustrating a method of determining an effective end through a segment tree in FIG. 15.
  • FIG. 24 is a view for explaining FIG. 23.
  • FIG. 25 is a flowchart illustrating a method of extracting a targeting point and depth corresponding to the effective end determined in FIG. 15.
  • FIGS. 26 and 27 are views for explaining FIG. 25.
  • FIGS. 28 to 30 are diagrams for explaining video feedback provided corresponding to the targeting point extracted through FIG. 15.
  • Mobile terminals described herein may include mobile phones, smart phones, laptop computers, digital broadcasting terminals, personal digital assistants (PDAs), portable multimedia players (PMPs), navigation devices, slate PCs, tablet PCs, ultrabooks, and wearable devices (for example, a watch-type terminal (smartwatch), a glass-type terminal (smart glass), or a head mounted display (HMD)).
  • However, those skilled in the art will readily appreciate that the configuration according to the embodiments described in this specification may also be applied to fixed terminals such as a digital TV, a desktop computer, and a digital signage, except where it is applicable only to a mobile terminal.
  • FIG. 1A is a block diagram illustrating a mobile terminal related to the present invention.
  • FIGS. 1B and 1C are conceptual views of an example of a mobile terminal related to the present invention viewed from different directions.
  • the mobile terminal 100 includes a wireless communication unit 110, an input unit 120, a sensing unit 140, an output unit 150, an interface unit 160, a memory 170, a control unit 180, and a power supply unit 190 ) And the like.
  • The components shown in FIG. 1A are not essential for implementing the mobile terminal 100, so the mobile terminal 100 described herein may have more or fewer components than those listed above.
  • Among these components, the wireless communication unit 110 may include one or more modules that enable wireless communication between the mobile terminal 100 and a wireless communication system, between the mobile terminal 100 and another mobile terminal 100, or between the mobile terminal 100 and an external server. Also, the wireless communication unit 110 may include one or more modules connecting the mobile terminal 100 to one or more networks.
  • the wireless communication unit 110 may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114, and a location information module 115. .
  • The input unit 120 may include a camera 121 or video input unit for inputting an image signal, a microphone 122 or audio input unit for inputting an audio signal, and a user input unit 123 (for example, a touch key, a push key, and the like) for receiving information from a user.
  • The voice data or image data collected by the input unit 120 may be analyzed and processed as a control command of the user.
  • the sensing unit 140 may include one or more sensors for sensing at least one of information in the mobile terminal 100, surrounding environment information surrounding the mobile terminal 100, and user information.
  • For example, the sensing unit 140 may include a proximity sensor 141, an illumination sensor 142, a touch sensor, an acceleration sensor, a magnetic sensor, a gravity sensor (G-sensor), a gyroscope sensor, a motion sensor, an RGB sensor, an infrared sensor (IR sensor), a fingerprint scan sensor, an ultrasonic sensor, an optical sensor (for example, the camera (see 121)), a microphone (see 122), a battery gauge, an environmental sensor (for example, a barometer, a hygrometer, a thermometer, a radiation detection sensor, a heat detection sensor, or a gas detection sensor), and a chemical sensor (for example, an electronic nose, a healthcare sensor, or a biometric sensor).
  • the mobile terminal 100 disclosed in this specification may combine and use information sensed by at least two or more of these sensors.
  • The output unit 150 is for generating output related to vision, hearing, or tactile sense, and may include at least one of a display unit 151, an audio output unit 152, a haptic module 153, and an optical output unit 154.
  • the display unit 151 may form a mutual layer structure with the touch sensor or may be integrally formed, thereby realizing a touch screen.
  • the touch screen may function as a user input unit 123 that provides an input interface between the mobile terminal 100 and the user, and at the same time, provide an output interface between the mobile terminal 100 and the user.
  • the interface unit 160 serves as a passage with various types of external devices connected to the mobile terminal 100.
  • The interface unit 160 may include at least one of a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device equipped with an identification module, an audio input/output (I/O) port, a video input/output (I/O) port, and an earphone port.
  • the memory 170 stores data supporting various functions of the mobile terminal 100.
  • the memory 170 may store a number of application programs (application programs) that are driven in the mobile terminal 100, data for operation of the mobile terminal 100, and instructions. At least some of these applications may be downloaded from external servers via wireless communication. In addition, at least some of these application programs may exist on the mobile terminal 100 from the time of shipment for basic functions of the mobile terminal 100 (for example, an incoming call, an outgoing function, a message reception, and an outgoing function). Meanwhile, the application program may be stored in the memory 170 and installed on the mobile terminal 100 to be driven by the controller 180 to perform an operation (or function) of the mobile terminal 100.
  • the controller 180 controls the overall operation of the mobile terminal 100 in addition to the operations related to the application program.
  • the controller 180 may provide or process appropriate information or functions to the user by processing signals, data, information, etc. input or output through the above-described components or by driving an application program stored in the memory 170.
  • controller 180 may control at least some of the components discussed with reference to FIG. 1A to drive an application program stored in the memory 170. Furthermore, the controller 180 may operate by combining at least two or more of the components included in the mobile terminal 100 for driving the application program.
  • the power supply unit 190 receives external power and internal power to supply power to each component included in the mobile terminal 100.
  • the power supply unit 190 includes a battery, and the battery may be a built-in battery or a replaceable battery.
  • At least some of the components may operate in cooperation with each other to implement an operation, control, or control method of the mobile terminal 100 according to various embodiments described below. Also, the operation, control, or control method of the mobile terminal 100 may be implemented on the mobile terminal 100 by driving at least one application program stored in the memory 170.
  • the input unit 120 is for input of image information (or signals), audio information (or signals), data, or information input from a user.
  • In order to input image information, the mobile terminal 100 may be provided with one or more cameras 121.
  • the camera 121 may be part of the mobile terminal 100 of the present invention, or may be configured to include the mobile terminal 100. That is, the camera 121 and the mobile terminal 100 of the present invention may include at least some common features or configurations.
  • the camera 121 processes image frames such as still images or moving images obtained by an image sensor in a video call mode or a shooting mode.
  • the processed image frame may be displayed on the display unit 151 or stored in the memory 170.
  • Meanwhile, a plurality of cameras 121 provided in the mobile terminal 100 may be arranged to form a matrix structure, and through the cameras 121 forming the matrix structure, a plurality of pieces of image information having various angles or focal points may be input to the mobile terminal 100.
  • the plurality of cameras 121 may be arranged in a stereo structure to acquire left and right images for realizing a stereoscopic image.
  • the sensing unit 140 senses at least one of information in the mobile terminal 100, surrounding environment information surrounding the mobile terminal 100, and user information, and generates a sensing signal corresponding thereto.
  • the controller 180 may control driving or operation of the mobile terminal 100 based on the sensing signal, or perform data processing, function, or operation related to an application program installed in the mobile terminal 100. Representative sensors among various sensors that may be included in the sensing unit 140 will be described in more detail.
  • The proximity sensor 141 refers to a sensor that detects, without mechanical contact, the presence or absence of an object approaching a predetermined detection surface or an object in the vicinity, using the force of an electromagnetic field, infrared rays, or the like.
  • The proximity sensor 141 may be disposed in the inner region of the mobile terminal 100 surrounded by the touch screen described above, or in the vicinity of the touch screen.
  • the proximity sensor 141 examples include a transmission type photoelectric sensor, a direct reflection type photoelectric sensor, a mirror reflection type photoelectric sensor, a high frequency oscillation type proximity sensor, a capacitive type proximity sensor, a magnetic type proximity sensor, and an infrared proximity sensor.
  • the proximity sensor 141 may be configured to detect the proximity of the object by a change in electric field according to the proximity of the conductive object. In this case, the touch screen (or touch sensor) itself may be classified as a proximity sensor.
  • The proximity sensor 141 may detect a proximity touch and a proximity touch pattern (for example, proximity touch distance, proximity touch direction, proximity touch speed, proximity touch time, proximity touch position, proximity touch movement state, and the like).
  • The controller 180 processes data (or information) corresponding to the proximity touch operation and the proximity touch pattern detected through the proximity sensor 141, and may further output visual information corresponding to the processed data on the touch screen. Furthermore, the controller 180 may control the mobile terminal 100 so that different operations or data (or information) are processed depending on whether a touch on the same point of the touch screen is a proximity touch or a contact touch.
  • The touch sensor detects a touch (or touch input) applied to the touch screen (or the display unit 151) using at least one of various touch methods such as a resistive film method, a capacitive method, an infrared method, an ultrasonic method, and a magnetic field method.
  • the touch sensor may be configured to convert a change in pressure applied to a specific part of the touch screen or capacitance generated in the specific part into an electrical input signal.
  • The touch sensor may be configured to detect the position and area at which a touch object touches the touch screen, the pressure at the time of the touch, the capacitance at the time of the touch, and the like.
  • the touch object is an object that applies a touch to the touch sensor, and may be, for example, a finger, a touch pen or a stylus pen, a pointer, and the like.
  • When there is a touch input to the touch sensor, the corresponding signal(s) are sent to a touch controller. The touch controller processes the signal(s) and then transmits the corresponding data to the controller 180. Accordingly, the controller 180 can know which area of the display unit 151 has been touched.
  • the touch controller may be a separate component from the controller 180, or may be the controller 180 itself.
  • The controller 180 may perform different controls or the same control according to the type of touch object that touches the touch screen (or a touch key provided in addition to the touch screen). Whether to perform different controls or the same control according to the type of touch object may be determined according to the current operating state of the mobile terminal 100 or the application program being executed.
  • The touch sensor and the proximity sensor described above may be used independently or in combination to sense various types of touches on the touch screen, such as a short (or tap) touch, a long touch, a multi touch, a drag touch, a flick touch, a pinch-in touch, a pinch-out touch, a swipe touch, and a hovering touch.
  • the ultrasonic sensor may recognize location information of a sensing target using ultrasonic waves.
  • the control unit 180 may calculate the position of the wave generating source through information sensed by the optical sensor and the plurality of ultrasonic sensors.
  • The position of the wave generating source can be calculated using the property that light is much faster than ultrasonic waves, that is, the time for light to reach the optical sensor is much shorter than the time for ultrasonic waves to reach the ultrasonic sensor. More specifically, the position of the wave generating source may be calculated from the time difference with respect to the arrival time of the ultrasonic waves, using the light as a reference signal.
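  • As an illustration only (not part of the original disclosure), this calculation can be sketched as follows, assuming the light arrival is taken as the time reference and a nominal speed of sound in air; the function and variable names are hypothetical.

        # Minimal sketch: estimating the distance to a wave generating source from the
        # arrival-time difference between light and ultrasound. Light arrival is treated
        # as time zero because light reaches the optical sensor almost instantaneously.

        SPEED_OF_SOUND_M_PER_S = 343.0  # assumed value for air at about 20 degrees C

        def distance_to_wave_source(t_light_s: float, t_ultrasound_s: float) -> float:
            """Distance in meters, given the arrival times of light and ultrasound."""
            time_difference = t_ultrasound_s - t_light_s  # light is the reference signal
            return SPEED_OF_SOUND_M_PER_S * time_difference

        # Example: ultrasound arrives 2.9 ms after the light pulse, i.e. roughly 1 m away.
        print(distance_to_wave_source(0.0, 0.0029))  # ~0.99 m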
  • the camera 121 includes at least one of a camera sensor (eg, CCD, CMOS, etc.), a photo sensor (or image sensor), and a laser sensor.
  • the camera 121 and the laser sensor may be combined with each other to detect a touch of a sensing target for a 3D stereoscopic image.
  • the photo sensor may be stacked on the display element, which is configured to scan the movement of the sensing object close to the touch screen. More specifically, the photo sensor mounts photo diodes and TRs (transistors) in rows / columns to scan the contents loaded on the photo sensor using electrical signals that change according to the amount of light applied to the photo diode. That is, the photo sensor performs coordinate calculation of the sensing object according to the change amount of light, and through this, location information of the sensing object may be obtained.
  • the disclosed mobile terminal 100 has a bar-shaped terminal body.
  • the present invention is not limited to this, and may be applied to various structures such as a watch type, a clip type, a glass type, or a folder type, a flip type, a slide type, a swing type, a swivel type to which two or more bodies are movably coupled.
  • the terminal body may be understood as a concept of referring to the mobile terminal 100 as at least one aggregate.
  • the mobile terminal 100 includes a case (for example, a frame, a housing, a cover, etc.) forming an exterior. As illustrated, the mobile terminal 100 may include a front case 101 and a rear case 102. Various electronic components are disposed in the inner space formed by the combination of the front case 101 and the rear case 102. At least one middle case may be additionally disposed between the front case 101 and the rear case 102.
  • the display unit 151 is disposed on the front of the terminal body to output information. As illustrated, the window 151a of the display unit 151 is mounted on the front case 101 to form the front surface of the terminal body together with the front case 101.
  • electronic components may also be mounted on the rear case 102.
  • Electronic components that can be mounted on the rear case 102 include a removable battery, an identification module, and a memory card.
  • A rear cover 103 for covering the mounted electronic components may be detachably coupled to the rear case 102. Therefore, when the rear cover 103 is separated from the rear case 102, the electronic components mounted on the rear case 102 are exposed to the outside.
  • When the rear cover 103 is coupled to the rear case 102, a part of the side surface of the rear case 102 may be exposed. In some cases, the rear case 102 may be completely covered by the rear cover 103 when coupled. Meanwhile, an opening for exposing the camera 121b or the sound output unit 152b to the outside may be provided in the rear cover 103.
  • These cases (101, 102, 103) may be formed by injection of synthetic resin or may be formed of metal, for example, stainless steel (STS), aluminum (Al), titanium (Ti), or the like.
  • the mobile terminal 100 may be configured such that one case provides the inner space, unlike the above example in which a plurality of cases provide an inner space accommodating various electronic components.
  • For example, a unibody mobile terminal 100 in which synthetic resin or metal extends from the side surface to the rear surface may be implemented.
  • the mobile terminal 100 may be provided with a waterproof portion (not shown) to prevent water from entering the terminal body.
  • The waterproof portion is provided between the window 151a and the front case 101, between the front case 101 and the rear case 102, or between the rear case 102 and the rear cover 103, and may include a waterproof member that seals the inner space when these parts are coupled.
  • the mobile terminal 100 includes a display unit 151, first and second sound output units 152a and 152b, a proximity sensor 141, an illuminance sensor 142, a light output unit 154, the first and second units Cameras 121a and 121b, first and second operation units 123a and 123b, a microphone 122, and an interface unit 160 may be provided.
  • the mobile terminal 100 in which the second sound output unit 152b and the second camera 121b are disposed on the rear surface of the mobile terminal 100 will be described as an example.
  • first operation unit 123a may not be provided on the front of the terminal body, and the second sound output unit 152b may be provided on the side of the terminal body rather than the rear of the terminal body.
  • the display unit 151 displays (outputs) information processed by the mobile terminal 100.
  • the display unit 151 may display execution screen information of an application program driven by the mobile terminal 100, or UI (User Interface) or GUI (Graphic User Interface) information according to the execution screen information. .
  • The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor-liquid crystal display (TFT LCD), an organic light-emitting diode (OLED), a flexible display, a three-dimensional display (3D display), and an electronic ink display (e-ink display).
  • two or more display units 151 may be present depending on the implementation form of the mobile terminal 100.
  • a plurality of display units may be spaced apart from one surface or integrally disposed in the mobile terminal 100, or may be respectively disposed on different surfaces.
  • the display unit 151 may include a touch sensor that senses a touch on the display unit 151 so that a control command can be input by a touch method. Using this, when a touch is made to the display unit 151, the touch sensor detects the touch, and the controller 180 can be configured to generate a control command corresponding to the touch based on the touch.
  • the content input by the touch method may be a letter or a number, or an instruction or designable menu item in various modes.
  • The touch sensor may be formed as a film having a touch pattern and disposed between the window 151a and a display (not shown) on the rear surface of the window 151a, or may be a metal wire directly patterned on the rear surface of the window 151a.
  • the touch sensor may be formed integrally with the display.
  • the touch sensor may be disposed on the substrate of the display, or may be provided inside the display.
  • the display unit 151 may form a touch screen together with a touch sensor, and in this case, the touch screen may function as a user input unit 123 (see FIG. 1A). In some cases, the touch screen may replace at least some functions of the first operation unit 123a.
  • The first sound output unit 152a may be implemented as a receiver that delivers a call sound to the user's ear, and the second sound output unit 152b may be implemented as a loud speaker that outputs various alarm sounds or multimedia playback sounds.
  • An acoustic hole for emitting sound generated from the first sound output unit 152a may be formed in the window 151a of the display unit 151.
  • the present invention is not limited thereto, and the sound may be configured to be emitted along an assembly gap between structures (for example, a gap between the window 151a and the front case 101). In this case, the appearance of the mobile terminal 100 may be simpler because the holes formed independently for the sound output are not visible or hidden.
  • the light output unit 154 is configured to output light to notify when an event occurs. Examples of the event include message reception, call signal reception, missed calls, alarm, schedule notification, email reception, information reception through an application, and the like.
  • The controller 180 may control the light output unit 154 to end the output of light when the user's confirmation of the event is detected.
  • the first camera 121a processes image frames of still images or moving pictures obtained by an image sensor in a shooting mode or a video call mode.
  • the processed image frame may be displayed on the display unit 151, and may be stored in the memory 170.
  • The first and second operation units 123a and 123b are examples of the user input unit 123, which is operated to receive commands for controlling the operation of the mobile terminal 100, and may also be collectively referred to as a manipulating portion.
  • the first and second operation units 123a and 123b may be adopted in any manner as long as the user operates the device while receiving a tactile feeling, such as touch, push, scroll.
  • the first and second manipulation units 123a and 123b may be employed in such a way that the user operates without a tactile feeling through a proximity touch, a hovering touch, or the like.
  • the first operation unit 123a is illustrated as a touch key, but the present invention is not limited thereto.
  • the first operation unit 123a may be a mechanical key or a combination of a touch key and a push key.
  • Contents input by the first and second operation units 123a and 123b may be variously set.
  • For example, the first operation unit 123a may receive commands such as menu, home key, cancel, and search, and the second operation unit 123b may receive commands such as adjusting the volume of the sound output from the first or second sound output units 152a and 152b and switching the display unit 151 to a touch recognition mode.
  • a rear input unit (not shown) may be provided.
  • The rear input unit is operated to receive commands for controlling the operation of the mobile terminal 100, and the input content may be variously set. For example, the rear input unit may receive commands such as power on/off, start, end, and scroll, commands for adjusting the volume of sound output from the first and second sound output units 152a and 152b, and commands for switching the display unit 151 to a touch recognition mode.
  • the rear input unit may be implemented in a form capable of input by touch input, push input, or a combination thereof.
  • the rear input unit may be disposed to overlap the front display unit 151 in the thickness direction of the terminal body.
  • the rear input unit may be arranged at the upper rear portion of the terminal body so that it can be easily operated using the index finger.
  • the present invention is not necessarily limited to this, and the position of the rear input unit may be changed.
  • When the rear input unit is provided on the rear surface of the terminal body, a new type of user interface using the rear input unit can be implemented.
  • In addition, when the above-described touch screen or rear input unit replaces at least some functions of the first operation unit 123a provided on the front surface of the terminal body, and the first operation unit 123a is thus not disposed on the front surface of the terminal body, the display unit 151 may be configured with a larger screen.
  • the mobile terminal 100 may be provided with a fingerprint recognition sensor for recognizing a user's fingerprint, and the controller 180 may use fingerprint information detected through the fingerprint recognition sensor as an authentication means.
  • the fingerprint recognition sensor may be embedded in the display unit 151 or the user input unit 123.
  • the microphone 122 is configured to receive a user's voice, other sounds, and the like.
  • the microphone 122 may be provided at a plurality of locations and configured to receive stereo sound.
  • the interface unit 160 is a passage through which the mobile terminal 100 can be connected to an external device.
  • For example, the interface unit 160 may be at least one of a connection terminal for connection with another device (for example, earphones or an external speaker), a port for short-range communication (for example, an infrared port (IrDA port), a Bluetooth port, a wireless LAN port, and the like), and a power supply terminal for supplying power to the mobile terminal 100.
  • the interface unit 160 may be implemented in the form of a socket that accommodates an external card such as a subscriber identification module (SIM) or a user identity module (UIM) or a memory card for storing information.
  • a second camera 121b may be disposed on the rear side of the terminal body.
  • the second camera 121b has a shooting direction substantially opposite to the first camera 121a.
  • the second camera 121b may include a plurality of lenses arranged along at least one line.
  • the plurality of lenses may be arranged in a matrix format.
  • Such a camera may be referred to as an 'array camera'.
  • When the second camera 121b is configured as an array camera, images may be captured in a variety of ways using the plurality of lenses, and images of better quality may be obtained.
  • The flash 124 may be disposed adjacent to the second camera 121b. When a subject is photographed with the second camera 121b, the flash 124 directs light toward the subject.
  • a second sound output unit 152b may be additionally disposed on the terminal body.
  • the second sound output unit 152b may implement a stereo function together with the first sound output unit 152a, or may be used to implement a speakerphone mode during a call.
  • the terminal body may be provided with at least one antenna for wireless communication.
  • the antenna may be built in the terminal body or may be formed in the case.
  • an antenna forming part of the broadcast receiving module 111 may be configured to be pulled out from the terminal body.
  • the antenna may be formed of a film type and attached to the inner surface of the rear cover 103, or a case including a conductive material may be configured to function as an antenna.
  • the terminal body is provided with a power supply unit 190 (see FIG. 1A) for supplying power to the mobile terminal 100.
  • the power supply unit 190 may include a battery 191 built in the terminal body or configured to be detachable from the outside of the terminal body.
  • the battery 191 may be configured to receive power through a power cable connected to the interface unit 160. Also, the battery 191 may be configured to be wirelessly charged through a wireless charger.
  • the wireless charging may be implemented by a magnetic induction method or a resonance method (magnetic resonance method).
  • the rear cover 103 is coupled to the rear case 102 so as to cover the battery 191 to limit the detachment of the battery 191, and is configured to protect the battery 191 from external impact and foreign matter.
  • the rear cover 103 may be detachably coupled to the rear case 102.
  • FIGS. 2 to 5 are views for explaining a driving method of the TOF camera 200: FIG. 2 is a basic configuration diagram of the TOF camera 200, FIG. 3 is a diagram specifically showing the image sensor 220 in the TOF camera 200, and FIGS. 4 and 5 are diagrams illustrating a method of acquiring depth data by receiving light at the image sensor 220. FIG. 6 is a diagram illustrating how the light source 210 is driven to remove ambient light and offset.
  • the present invention relates to a mobile terminal that provides a vision-based user interface using a depth camera.
  • Depth cameras can be classified into three types according to a method of extracting a depth value.
  • the first is a method of obtaining the distance of each pixel (Pixel) through the time that the light irradiated by the depth camera is reflected on the object and returned by the Time Of Flight (TOF) method related to the present invention.
  • the second is a stereo method in which a depth camera obtains the distance of each pixel using binocular parallax.
  • The third is a structured pattern method, in which the depth camera irradiates patterned light onto an object and obtains the distance of each pixel through the degree of distortion of the pattern.
  • Referring to FIG. 2, the TOF camera 200 may include a light source 210 that irradiates light toward the object 400 and an image sensor 220 that receives the light reflected by the object 400.
  • the TOF camera 200 may include a main processor 230a internally or externally to calculate the depth of the object 400 using raw data obtained through the image sensor 220.
  • the light source 210 may irradiate the modularized light 211 through a timing generator (not shown).
  • the light source 210 may irradiate a light source of a specific signal through a timing generator, and in some cases, periodically change the phase of a specific signal and irradiate the light source.
  • the image sensor 220 may include at least one cell 221 having two photogates 221a and 221b.
  • the two photo gates 221a and 221b may respectively receive the light 212 reflected by the object to obtain raw data corresponding thereto.
  • the method in which the image sensor 220 acquires raw data of the light 212 received through the two photo gates 221a and 221b is as follows.
  • The photogates 221a and 221b initialize the voltage of the cell by applying the reset signal Rst (step 1), accumulate the charge of the received light in their respective capacitors by applying the modulation signals DMIX0 and DMIX1 (step 2), and read the amount of charge accumulated in the capacitors by applying the address decode signal (step 3).
  • Here, the modulation signal DMIX0 applied to the first photogate 221a may have a phase opposite to that of the modulation signal DMIX1 applied to the second photogate 221b, and one of the modulation signal DMIX0 applied to the first photogate 221a and the modulation signal DMIX1 applied to the second photogate 221b may have the same phase as the light 211 emitted from the light source 210.
  • FIG. 4(a) shows the light 211 irradiated from the light source 210, and FIG. 4(b) shows the light 212 that is reflected by the object 400 and received at the image sensor 220; the received light 212 has a phase difference with respect to the light 211 irradiated from the light source 210. FIGS. 4(c) and 4(d) show the modulation signal DMIX0 applied to the first photogate 221a and the modulation signal DMIX1 applied to the second photogate 221b. As described above, the modulation signal DMIX1 applied to the second photogate 221b may have the same phase as the light 211 emitted from the light source 210, and the modulation signal DMIX0 applied to the first photogate 221a may have the opposite phase. Since the light 212 received at the image sensor 220 has a phase difference with respect to the light 211 irradiated from the light source 210, the amounts of charge drawn from Node-A into the capacitor of the first photogate 221a and from Node-B into the capacitor of the second photogate 221b through the modulation signals DMIX0 and DMIX1 are as shown in FIGS. 4(c) and 4(d). That is, the image sensor 220 obtains raw data corresponding to the amounts of charge accumulated in the capacitors of the first photogate 221a and the second photogate 221b, and the phase difference (and thus the depth) can be obtained from the ratio of these charge amounts.
  • FIG. 5(b) shows a case where the light 211 irradiated from the light source 210 and the light 212 reflected by the object 400 have a phase difference of 90 degrees. In this case, the capacitor of the first photogate 221a and the capacitor of the second photogate 221b may be charged with the same amount of charge. FIG. 5(c) shows a case where the light 211 irradiated from the light source 210 and the light 212 reflected by the object 400 have a phase difference of 180 degrees. In this case, the charge accumulates in only one of the two capacitors. A simple sketch of how the charge ratio can be converted into a distance follows.
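  • The patent text does not give a numeric formula, but a common way to convert the two photogate charge amounts of a pulsed TOF sensor into a distance is sketched below; the variable names (q_a, q_b, t_pulse_s) and the formula itself are illustrative assumptions, not quotations from the disclosure.

        # Illustrative sketch only: pulsed-TOF distance from the ratio of the charge
        # accumulated in the two photogate capacitors.
        C_LIGHT = 299_792_458.0  # speed of light, m/s

        def distance_from_charges(q_a: float, q_b: float, t_pulse_s: float) -> float:
            """q_a: charge on the gate in phase with the emitted pulse,
            q_b: charge on the opposite-phase gate, t_pulse_s: pulse width in seconds."""
            if q_a + q_b == 0:
                raise ValueError("no reflected light received")
            # The fraction of the pulse that slips into the second gate grows with distance.
            return 0.5 * C_LIGHT * t_pulse_s * (q_b / (q_a + q_b))

        # Equal charge on both gates (the 90-degree case of FIG. 5(b)) places the object
        # halfway through the unambiguous range for the chosen pulse width.
        print(distance_from_charges(1.0, 1.0, 30e-9))  # ~2.25 m for an assumed 30 ns pulse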
  • the light source 210 irradiates a light source of a specific signal
  • the image sensor 220 receives light corresponding thereto.
  • the image sensor 220 may receive ambient light in addition to the light 212 emitted from the light source 210. Therefore, in order to eliminate errors caused by ambient light, the light source 210 may periodically change the phase of a specific signal to irradiate the light source.
  • FIG. 6 illustrates an embodiment in which a phase of a specific signal is periodically varied in the light source 210 to irradiate a light source to eliminate errors caused by ambient light.
  • FIG. 6 (a) shows a light source having a phase difference of 0 degrees, that is, a light source irradiated with a single preset pulse
  • FIG. 6 (b) shows a light source having a phase difference of 90 degrees from the light source of FIG. 6 (a).
  • 6 (c) shows a light source having a phase difference of 180 degrees from the light source of FIG. 6 (a)
  • FIG. 6 (d) shows a light source having a phase difference of 270 degrees from the light source of FIG. 6 (a).
  • FIG. 6 (e) shows light reflected by the object 400 and received by the image sensor 220.
  • The modulation signal DMIX1 applied to the second photogate 221b may be synchronized with the signals of FIGS. 6(a) to 6(d), and FIG. 6(e) shows the charge amounts 213a to 213b accumulated in the capacitor of the second photogate 221b when the received light reflected by the object 400 corresponds to each of these signals. In this way, the light source 210 periodically varies the phase of the specific signal, and the error caused by ambient light can be corrected through the differences in the ratios of charge accumulated in the capacitors of the image sensor 220, as sketched below.
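  • The correction itself is not spelled out numerically in the text; the sketch below shows the standard 4-phase (0/90/180/270 degree) calculation, in which the differences Q0-Q180 and Q90-Q270 cancel any constant ambient light and offset. The symbol names and the modulation frequency are assumptions for illustration.

        import math

        C_LIGHT = 299_792_458.0  # speed of light, m/s

        def depth_from_four_phases(q0, q90, q180, q270, f_mod_hz):
            """Depth in meters from the four phase-shifted charge measurements."""
            # A constant ambient/offset term cancels in both differences below.
            phase = math.atan2(q270 - q90, q0 - q180)
            if phase < 0:
                phase += 2 * math.pi
            return C_LIGHT * phase / (4 * math.pi * f_mod_hz)

        # Example: a phase shift of one eighth of the modulation period at an assumed
        # 20 MHz modulation frequency corresponds to roughly 0.94 m.
        print(depth_from_four_phases(q0=1.0, q90=0.5, q180=0.0, q270=1.5, f_mod_hz=20e6))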
  • FIG. 7 is a view for explaining the problem of the existing TOF camera 200 and the effect of the TOF camera according to an embodiment of the present invention which improves the problem.
  • FIG. 7 (a) shows a state in which the TOF camera 200 always maintains an ON state and acquires a depth image by irradiating light to the tracking area 300 even when the user 400 is not present.
  • When the TOF camera 200 always maintains an ON state and irradiates light to the tracking area 300 to acquire a depth image even while the user 400 is not present, the power of the mobile terminal is consumed unnecessarily. Acquiring a depth image only when the user 400 is present in the tracking area 300 is therefore technically effective in preventing excessive power consumption.
  • Accordingly, the TOF camera 200 of the present invention and the mobile terminal including it are driven with low power: they operate in a first mode to confirm the existence of the user 400, and when the existence of the user 400 is confirmed in the first mode, they operate in a second mode for acquiring a depth image. In this case, excessive power consumption can be prevented even if the TOF camera 200 is always in an ON state.
  • FIG. 8 is a schematic diagram for explaining the configuration of the TOF camera 200 according to an embodiment of the present invention.
  • Referring to FIG. 8, the TOF camera operates in a first or second mode and includes a light source 210 that irradiates light toward the object 400, an image sensor 220 that receives the light reflected by the object 400 and obtains raw data, operation units 230a and 230b that process the raw data, and a controller connected to them. In the first mode, the controller controls the light source 210 to irradiate light of a specific signal and, when a change in the raw data is sensed, switches to the second mode; in the second mode, the controller controls the light source 210 to vary the phase of the specific signal and controls the operation units 230a and 230b to generate depth data of the object 400 from the raw data.
  • Although each component module is illustrated separately, designs in which some modules are combined also fall within the scope of the present invention.
  • the controller may include a timing generator that provides a pulse signal to the light source 210 and the image sensor 220, and in some cases, may be configured to include arithmetic units 230a and 230b. Also, the controller may have a configuration corresponding to the controller 180 shown in FIG. 1A.
  • As described with reference to FIG. 6, in the second mode the TOF camera 200 controls the light source 210 to periodically vary the phase of the specific signal in order to correct errors caused by ambient light or offset in the depth data. In contrast, in the first mode the light source 210 is controlled to irradiate the light of the specific signal without a phase change, so the power consumed for the phase change can be reduced.
  • The image sensor 220 may include first and second photogates 221a and 221b to which signals having different phases are applied during a preset frame and which receive the reflected light 212 correspondingly.
  • the first and second photo gates 221a and 221b constitute one cell 221, and the image sensor 220 may include a plurality of cells 221.
  • The controller may control the image sensor 220 to acquire the raw data through the first photogate 221a in the first mode. That is, the controller applies the modulation signal DMIX0 (see FIG. 4) only to the first photogate 221a and does not apply the modulation signal DMIX1 to the second photogate 221b.
  • In other words, in the first mode the controller uses only one photogate in the cell 221 including the first and second photogates 221a and 221b, so the power consumed to drive both the first and second photogates 221a and 221b can be reduced.
  • The first mode, which only checks for the existence of the moving object 400, does not need to calculate the depth data of the object 400. To calculate the depth data, the ratio of the charge amounts accumulated in the capacitors of the first and second photogates 221a and 221b is required. However, the charge amount of the capacitor of a single photogate by itself reflects the intensity of the reflected light, and a change in this intensity is enough to confirm the existence of the moving object 400. That is, in the first mode, the power consumed to drive both the first and second photogates 221a and 221b may be reduced by driving only one photogate.
  • To this end, in the TOF camera 200 of the present invention the operation units 230a and 230b include a sub-operation unit 230b that determines the change in intensity of the raw data obtained through the first photogate 221a, and a main operation unit 230a that generates (calculates) the depth data of the object through the intensity ratio of the raw data obtained from the first and second photogates 221a and 221b.
  • the main operation unit 230a is the same as the main operation unit 230a included in the existing TOF camera 200 of FIG. 2 and may correspond to a Mobile Industry Processor Interface (MIPI).
  • the main operation unit 230a generates depth data through raw data acquired in the second mode.
  • However, the main operation unit 230a may consume a large amount of power, since its calculation process is complicated and it processes a large amount of data.
  • In contrast, the sub-operation unit 230b is an operation unit that determines only the change in the intensity of the raw data acquired through one photogate; its processing is simple and its power consumption is low, and it may correspond to a Serial Peripheral Interface (SPI).
  • In this way, the TOF camera 200 can be prevented from consuming excessive power even if it is always kept in an ON state.
  • When the sub-operation unit 230b detects a change in the intensity of the raw data, the controller may wake up the main operation unit 230a. That is, the main operation unit 230a remains unpowered in the first mode, and power may be applied to it when the camera switches to the second mode. A minimal sketch of this wake-up logic follows.
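  • A minimal sketch of the wake-up behavior described above, with hypothetical class and method names that are not defined by the patent:

        class MainOperationUnit:
            """Stays powered down in the first mode; generates depth data in the second."""
            def __init__(self):
                self.powered = False
            def wake_up(self):
                self.powered = True
            def sleep(self):
                self.powered = False

        class SubOperationUnit:
            """Low-power unit that only watches for an intensity change of the raw data."""
            def __init__(self, threshold):
                self.threshold = threshold
                self.previous_intensity = None
            def intensity_changed(self, intensity):
                changed = (self.previous_intensity is not None
                           and abs(intensity - self.previous_intensity) >= self.threshold)
                self.previous_intensity = intensity
                return changed

        # First mode: only the sub-operation unit runs; the main unit is woken on a change.
        sub, main = SubOperationUnit(threshold=10), MainOperationUnit()
        for frame_intensity in (100, 101, 140):   # simulated per-frame intensity readings
            if sub.intensity_changed(frame_intensity) and not main.powered:
                main.wake_up()                    # switch to the second mode
        print(main.powered)  # True: the second mode (depth data generation) is active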
  • FIG. 9 is a flowchart illustrating a control method for controlling a TOF camera according to an embodiment of the present invention.
  • a method of controlling a TOF camera according to an embodiment of the present invention in a first mode and a second mode is as follows.
  • the TOF camera can irradiate a light source of a specific signal.
  • the specific signal may be a periodic single pulse signal.
  • Here, the periodic single-pulse light source may be a light source that maintains only the pulse corresponding to FIG. 6(a), that is, a pulse whose phase is not periodically varied for correcting errors due to ambient light and offset.
  • the image sensor may acquire raw data by receiving light reflected by the object.
  • the raw data may include intensity data that can measure the intensity change.
  • The image sensor may include the first and second photogates 221a and 221b as described in FIG. 3, and in the first mode, intensity data may be acquired using only one photogate. That is, in the first mode, intensity data corresponding to the amount of charge accumulated in the capacitor of one photogate can be obtained.
  • the TOF camera may include a sub-operation unit 230b as described in FIG. 8, and may transmit the intensity data acquired in one photo gate to the sub-operation unit 230b.
  • the sub-operation unit 230b may determine whether a moving object exists by determining whether a difference between the N-1th intensity data and the N-th intensity data acquired at predetermined intervals is greater than or equal to the preset intensity.
  • The TOF camera according to the present invention can remain in the first mode when the difference between the N-1th intensity data and the Nth intensity data obtained at preset period intervals in the first mode is equal to or less than the preset intensity (S205, NO).
  • the TOF camera changes the phase of a specific signal in the second mode and can irradiate a light source to the object.
  • For example, the light source may irradiate the specific signal while varying its phase among 0 degrees, 90 degrees, 180 degrees, and 270 degrees (4 phases).
  • The step of generating depth data and providing a corresponding user UX/UI in the mobile terminal may be a step of providing graphic feedback through the display in response to the depth data.
  • the mobile terminal includes a memory that stores commands corresponding to the depth data, and the display can provide graphic feedback corresponding to the commands stored in the memory.
  • the TOF camera may be switched from the second mode to the first mode (S211).
  • switching from the second mode to the first mode may be performed when there is no change in the depth data for a preset time. That is, the mode can be switched based on whether the depth data acquired in the second mode changes, regardless of whether the UX / UI provided by the mobile terminal is terminated.
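  • The first-mode / second-mode switching of FIG. 9 can be summarized as a simple control loop. The sketch below is a hypothetical illustration only: the camera object, its method names, and the thresholds are assumptions, not part of this disclosure.

    import time
    import numpy as np

    def frame_difference(prev, cur):
        # Sum of absolute per-pixel differences between two frames.
        return float(np.abs(cur.astype(np.int64) - prev.astype(np.int64)).sum())

    def run_tof_camera(camera, intensity_threshold, idle_timeout_s):
        mode = 1                      # start in the low-power first mode
        prev_intensity = None
        prev_depth = None
        last_change = time.monotonic()
        while True:
            if mode == 1:
                camera.emit_single_pulse()                 # periodic single pulse
                frame = camera.read_intensity_frame()      # one photogate only
                if (prev_intensity is not None and
                        frame_difference(prev_intensity, frame) >= intensity_threshold):
                    camera.wake_main_operation_unit()      # apply power for mode 2
                    mode = 2
                    last_change = time.monotonic()
                prev_intensity = frame
            else:
                camera.emit_four_phase_pulses()            # 0/90/180/270-degree phases
                depth = camera.compute_depth_frame()       # main operation unit
                if prev_depth is None or frame_difference(prev_depth, depth) > 0:
                    last_change = time.monotonic()
                elif time.monotonic() - last_change >= idle_timeout_s:
                    camera.sleep_main_operation_unit()     # back to the first mode
                    mode = 1
                    prev_intensity = None
                prev_depth = depth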
  • FIG. 10 is a flow chart for explaining a method of determining the presence or absence of an object by comparing frames in FIG. 9, and FIG. 11 is a view for explaining FIG. 9.
  • the method of determining the existence of the moving object in the first mode is as follows.
  • the raw data obtained through the first or second photo gate is intensity data, and may be measured at predetermined time intervals for each pixel 212a.
  • the step in which the sub-operation unit 230b determines whether the difference between the N-1th intensity data and the Nth intensity data acquired at predetermined intervals is greater than or equal to the preset intensity (determining the presence or absence of an object, S205) may include the following steps.
  • the image sensor 220 may include cells 221 including the first photogate 211a and the second photogate 211b to form a plurality of rows and columns.
  • the plurality of cells 221 may correspond to the pixels 212a shown in FIG. 11, and the intensity data obtained by each of the plurality of cells 221 (the intensity data obtained from the first or second photogate) may correspond to the magnitude of the value indicated in each pixel 212a.
  • the sub-operation unit 230b calculates the sum of the pixel values for each column / row and calculates the difference between the N-1 frame and the N frame obtained at predetermined time intervals (S301).
  • FIG. 11 (a) shows the N-1 frame.
  • FIG. 11 (b) shows the N frame.
  • the step of calculating the difference between the N-1 frame and the N frame obtained at predetermined time intervals may include calculating the difference between the sums 212b of the pixel values included in each column of the N-1 frame and the N frame, as shown in FIGS. 11 (a) and (b).
  • the difference between the sum of the pixel values included in each column of the N-1 frame and the N frame 212b is 1 + 3 + 14 + 1 + 12, which is 30.
  • the step of calculating the difference between the N-1 frame and the N frame obtained at predetermined time intervals may also include calculating the difference between the sums 212c of the pixel values included in each row of the N-1 frame and the N frame, as shown in FIGS. 11 (a) and (b).
  • the difference between the sum of the pixel values included in each row of the N-1 frame and the N frame 212c is 1 + 4 + 12 + 11, which is 28.
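  • A minimal numpy sketch of this computation (S301) is given below. Whether the camera combines the column-wise and row-wise differences with max, sum, or separate comparisons is not stated in the disclosure, so the combination used here is an assumption, and the function name is hypothetical.

    import numpy as np

    def moving_object_detected(frame_prev, frame_cur, preset_intensity):
        # Column-wise and row-wise sums of pixel values for each frame,
        # then the accumulated absolute difference between the two frames.
        col_diff = np.abs(frame_cur.sum(axis=0).astype(np.int64)
                          - frame_prev.sum(axis=0).astype(np.int64)).sum()
        row_diff = np.abs(frame_cur.sum(axis=1).astype(np.int64)
                          - frame_prev.sum(axis=1).astype(np.int64)).sum()
        # In the FIG. 11 example the stated totals are 30 (columns) and 28 (rows).
        return max(col_diff, row_diff) >= preset_intensity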
  • FIG. 12 is a schematic diagram for explaining the configuration of the TOF camera 200 according to another embodiment of the present invention.
  • the TOF camera 200 includes a separate photodiode 260 and can receive, through the separate photodiode 260, the light 212 irradiated by the light source 210 and reflected by the object in the first mode.
  • the main operation unit 230a can then be woken up.
  • the TOF camera 200 illustrated in FIG. 12 is characterized in that it includes a separate component for receiving light in the first mode, while light in the second mode is received by the image sensor 220.
  • FIG. 13 is a view for explaining a method and a problem of recognizing a user's finger through a depth image acquired by a conventional depth camera.
  • it should be possible to quickly and accurately extract the portion corresponding to the user's hand 410 from the entire depth image and to accurately and quickly track the finger 414.
  • the method of tracking the shortest-distance point through the depth camera is not suitable for tracking a plurality of fingers 414.
  • FIG. 14 is a diagram for schematically illustrating a method in which a mobile terminal according to an embodiment of the present invention recognizes a point targeted by a user's hand 410 through a depth camera 200.
  • in order to interact with the user, the mobile terminal needs to quickly and accurately track the targeting fingers of the user's hand 410 and the points 411a and 411b targeted by those fingers. Further, a point 412 that protrudes from the user's hand 410 toward the depth camera but is not a targeting point must be excluded from tracking.
  • FIG. 14 (a) shows an embodiment in which the user's hand 410 is directed toward the depth camera 200
  • FIG. 14 (b) shows a segment 500 obtained in the depth image, corresponding to a distance h from the depth camera 200, when the depth camera 200 of FIG. 14 (a) captures the user's hand 410.
  • FIG. 14 (c) shows the segment tree 600 implemented through the inclusion relationships between the segments 500 of FIG. 14 (b) and their corresponding reference distances.
  • the mobile terminal may extract targeting points 411a and 411b from the depth image of the user's hand 410 through the segment tree 600.
  • the segment tree 600 may include at least one node 630.
  • the at least one node 630 may be a specific segment including a plurality of segments corresponding to neighboring distances. That is, when the segment corresponding to the (N+1)th reference distance (h_N+1) includes a plurality of segments corresponding to the Nth reference distance (h_N), the segment corresponding to the (N+1)th reference distance (h_N+1) is a specific segment and may correspond to the first node 630a in the segment tree 600.
  • the first effective end 610a is connected to the first node 630a, and since its distance ha1 from the first node 630a is greater than or equal to a preset distance, it may correspond to the first point 411a targeted by the user's hand.
  • the second effective end 610b is connected to the second node 630b, and since its distance ha2 from the second node 630b is greater than or equal to the preset distance, it may correspond to the second point 411b targeted by the user's hand 410.
  • otherwise, the end is classified as an invalid end 620, which corresponds to a point 412 that is not targeted by the user's hand 410.
  • the invalid end 620 is connected to the second node 630b, and since its distance hb1 from the second node 630b is less than the predetermined distance, it may correspond to a point 412 that is not targeted by the user's hand 410.
  • the present invention has the advantage of being able to track fingers with different node depths, such as the thumb, together with other targeting fingers, and of accurately tracking the targeting points of multiple fingers.
  • FIG. 15 is a flow chart for explaining FIG. 14.
  • the mobile terminal acquires a depth image through a depth camera to track a user's finger. (S401)
  • the mobile terminal sets an interaction region and removes an image outside the interaction region from the depth image.
  • the interaction area may be set within a first preset distance from the mobile terminal, and will be described in detail with reference to FIG. 16 below.
  • the mobile terminal may extract the target distance of the object through the pixel having the closest distance value in the depth image.
  • the target distance may be a starting reference distance for acquiring a segment according to the distance from the mobile terminal in the depth image, and will be specifically described with reference to FIG. 17 below.
  • the depth camera may acquire segments corresponding to the distance from the depth camera in the depth image, and implement a segment tree through the inclusion relationships between those segments.
  • the segment tree may be implemented through an inclusion relationship between segments obtained according to a distance from the mobile terminal, and will be described in detail with reference to FIGS. 18 to 22.
  • the mobile terminal determines the effective end through the distance from the node in the implemented segment tree.
  • the mobile terminal may identify a specific segment corresponding to the node in the segment tree, and determine an effective end through a segment located at a distance greater than or equal to a second preset distance from the distance at which the specific segment is obtained. In this regard, it will be described in detail with reference to FIGS. 23 and 24 below.
  • the mobile terminal may extract the location of at least one target point corresponding to the effective end, and provide video feedback through the display in response to the extracted location.
  • the method of extracting the location of the target point will be described with reference to FIGS. 26 and 27, and the method of providing video feedback through the display will be described in detail with reference to FIGS. 28 to 30.
  • the mobile terminal according to an embodiment of the present invention may include a memory storing at least one command, a depth camera capturing an object to obtain a depth image, a display for outputting video feedback corresponding to the object captured by the depth camera, and a controller connected to the memory, the depth camera, and the display.
  • according to the at least one command stored in the memory, the controller controls the depth camera to track the object, acquires at least one segment according to the distance from the mobile terminal in the depth image, and controls the display to output at least one target point located closer to the mobile terminal, by a second preset distance or more, than the distance at which a specific segment is acquired.
  • FIG. 16 is a diagram for explaining an interaction region in FIG. 15, and FIG. 17 is a diagram for describing a depth image obtained in the interaction region of FIG. 16.
  • the depth camera 200 may extract depth data for each pixel, and through this, it is possible to extract an object within a specific distance.
  • the interaction area 320 may be the area between the minimum recognition distance 310, at which the depth camera 200 can recognize an object, and the first preset distance 330.
  • the mobile terminal can track only objects that have entered the interaction area 320. Since the user reaches out toward the mobile terminal and interacts with it through hand movement, the object entering the interaction area 320 is likely to be the user's hand, and it is desirable for the mobile terminal to track such an object.
  • the mobile terminal removes the pixel value of the remaining pixels except for the pixel having the pixel value corresponding to the interaction region 320 in the acquired depth image.
  • a pixel 321 having a value of 0 is a pixel having a pixel value that does not correspond to the interaction area 320, and corresponds to a pixel from which a pixel value is removed.
  • the pixel 322 having a pixel value may be a pixel having a pixel value corresponding to the interaction region 320. That is, the pixel 322 having a pixel value may be a pixel having a pixel value corresponding to the first preset distance 330 or less.
  • the mobile terminal is designed to start tracking when an object enters the interaction area 320. That is, when an object does not enter the interaction area 320 and all pixel values are removed as 0, the mobile terminal does not perform tracking described below, thereby increasing energy and data efficiency.
  • the mobile terminal determines the shortest distance from the acquired depth image.
  • the shortest distance may be a distance corresponding to a minimum pixel value excluding 0 in FIG. 17.
  • the determined shortest distance may be used to form a segment tree of the object that has entered the interaction area 320.
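  • As a rough illustration of the interaction-area filtering and shortest-distance determination just described, the following numpy sketch keeps only pixels inside the interaction area and determines the shortest distance; the function and parameter names are hypothetical and the exact filtering used by the terminal is not specified in the disclosure.

    import numpy as np

    def filter_interaction_region(depth_image, min_recognition, first_preset):
        # Keep only pixels whose depth lies inside the interaction area 320,
        # i.e. between the minimum recognition distance 310 and the first
        # preset distance 330; all other pixel values are set to 0.
        mask = (depth_image >= min_recognition) & (depth_image <= first_preset)
        filtered = np.where(mask, depth_image, 0)
        nonzero = filtered[filtered > 0]
        # Shortest distance = smallest non-zero depth; None means no object entered.
        shortest = float(nonzero.min()) if nonzero.size else None
        return filtered, shortest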
  • a method of forming a segment tree will be described.
  • FIG. 18 is a flowchart for describing a method of implementing a segment tree corresponding to a user's hand in FIG. 15, and FIGS. 19 to 22 are diagrams for describing FIG. 18.
  • the mobile terminal may implement a segment tree when it starts tracking as an object enters the interaction area 320 of FIG. 16.
  • FIG. 21 is a diagram illustrating a process of dividing a segment.
  • FIG. 21 (b) shows segments segmented by grouping pixels having pixel values in the filtered image.
  • FIG. 22 is a diagram illustrating a method of implementing a segment tree through the segmented segment of FIG. 21 and determining a specific segment.
  • segment A has a corresponding reference distance longer than the reference distances corresponding to segments B, C, and D, and the area it occupies includes segments B, C, and D, as shown in the upper view of FIG. 22 (b).
  • the reference distances corresponding to segments B, C, and D are longer than the reference distances corresponding to segments E and F, and segments E and F are included in the areas occupied by segments B and C, respectively, as shown in the lower view of FIG. 22 (b).
  • the upper and lower views of FIG. 22 (b) may be implemented as a segment tree as shown in FIG. 22 (c).
  • segment A, segments B, C, and D, and segments E and F are located at the top, middle, and bottom of the tree according to their corresponding reference distances, and since segment A includes segments B, C, and D, it can be connected to segments B, C, and D.
  • segments B and C include segments E and F, respectively, so that each can be interconnected.
  • the mobile terminal according to the present invention can reduce the reference distance at a predetermined interval from the first preset distance, obtain the segment corresponding to each reference distance from the depth image, and form a segment tree through the inter-segment inclusion relationships.
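  • A hedged sketch of this segment-tree construction is shown below; it uses scipy's connected-component labelling to group pixels into segments, and every name and data-structure choice (build_segment_tree, the dictionary-based tree) is a hypothetical illustration rather than the disclosed implementation.

    import numpy as np
    from scipy import ndimage

    def build_segment_tree(depth_image, first_preset, shortest, step):
        # Step the reference distance down from the first preset distance toward
        # the shortest distance; at each reference distance, pixels closer than
        # that distance form the segments, and a segment is linked to the parent
        # segment (at the previous, larger reference distance) that contains it.
        tree = {}           # segment id -> {"mask", "dist", "parent"}
        next_id = 0
        prev_level = []     # segment ids obtained at the previous reference distance
        ref = first_preset
        while ref >= shortest:
            mask = (depth_image > 0) & (depth_image <= ref)
            labels, n = ndimage.label(mask)          # group pixels into segments
            level = []
            for k in range(1, n + 1):
                seg_mask = labels == k
                parent = next((p for p in prev_level
                               if (tree[p]["mask"] & seg_mask).any()), None)
                tree[next_id] = {"mask": seg_mask, "dist": float(ref), "parent": parent}
                level.append(next_id)
                next_id += 1
            prev_level = level
            ref -= step
        return tree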
  • FIG. 23 is a flowchart illustrating a method of determining an effective end through a segment tree in FIG. 15, and FIG. 24 is a diagram for describing FIG. 23.
  • when the mobile terminal implements a segment tree through the depth data, it determines the ends of the segment tree (S601).
  • an end may correspond to a target point of the present invention; this will be described in detail with reference to FIGS. 24 and 25.
  • the mobile terminal extracts a segment corresponding to the reference distance, and if the extracted segment does not include another segment, it may determine that segment as the end of the segment tree.
  • the segment tree 600 shown here is one example of the segment tree 600, and may be implemented as described through FIGS. 18 to 22.
  • the segment tree 600 according to the example included in FIG. 23 includes three ends 610a, 610b, and 620.
  • the length between each end and the node is extracted.
  • the length between each end and the node may be a difference between a reference distance corresponding to each end and a reference distance corresponding to the node.
  • the node may correspond to the specific segment of FIG. 22.
  • the segment tree 600 according to the example of FIG. 23 includes one node 630, and the three ends 610a, 610b, and 620 are connected to the node 630.
  • the length between each end (610a, 610b, 620) and the node 630 may be the difference between the reference distance (End b, End c) corresponding to each end (610a, 610b, 620) and the reference distance (End a) corresponding to the node 630.
  • the distance h4 between the first end 610a or the second end 610b and the node 630 may be the difference between the reference distance End b corresponding to the first end 610a and the second end 610b and the reference distance End a corresponding to the node 630.
  • the distance h3 between the third end 620 and the node 630 is the difference between the reference distance End c corresponding to the third end 620 and the reference distance End a corresponding to the node 630.
  • the mobile terminal determines an effective end based on whether the length between each end and the node is greater than or equal to the second preset distance THRES (S603). That is, the mobile terminal may determine, as an effective end, an end obtained at a reference distance spaced at least the second preset distance from the reference distance corresponding to the node.
  • an additional check on whether an end is invalid may be performed through metadata, namely the rate of size increase from the end toward the node; if the size increases abruptly, the end may be determined to be an invalid end.
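  • Building on the hypothetical tree structure sketched earlier, the effective-end determination of S601 to S603 could look as follows; treating a node as any segment with two or more child segments, and comparing the reference-distance gap against THRES, follow the description above, while the helper names are assumptions.

    def find_effective_ends(tree, thres):
        # Collect the children of every segment in the hypothetical tree structure.
        children = {seg_id: [] for seg_id in tree}
        for seg_id, seg in tree.items():
            if seg["parent"] is not None:
                children[seg["parent"]].append(seg_id)
        ends = [seg_id for seg_id, ch in children.items() if not ch]      # leaf segments
        nodes = {seg_id for seg_id, ch in children.items() if len(ch) >= 2}
        effective = []
        for end in ends:
            # Walk up to the nearest node (specific segment) above this end.
            cur = tree[end]["parent"]
            while cur is not None and cur not in nodes:
                cur = tree[cur]["parent"]
            # Effective end: reference-distance gap to the node is at least THRES.
            if cur is not None and abs(tree[cur]["dist"] - tree[end]["dist"]) >= thres:
                effective.append(end)
        return effective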
  • FIG. 25 is a flow chart for explaining a method of extracting a targeting point and depth corresponding to the effective end determined in FIG. 15, and FIGS. 26 and 27 are views for explaining FIG. 25.
  • the mobile terminal extracts the midpoint (x, y) of the segment corresponding to the effective end (S701). The mobile terminal can extract the midpoint (x, y) of the segment through the positions of the pixels constituting the segment corresponding to the effective end.
  • the mobile terminal extracts the depth (z) of the effective end (S702) and ends the procedure (S703).
  • FIG. 26 is an example of the segment tree 600, and may be implemented as described through FIGS. 18 to 22. Segments E and G may be segments corresponding to effective ends.
  • FIG. 27 is a depth image obtained at a reference distance End e corresponding to segments E and F of FIG. 26 and includes segments E and F through filtering.
  • the mobile terminal may extract the midpoint (x, y) of the segment E through the position of the pixel group 610e constituting the segment E in the segment E corresponding to the effective end.
  • the mobile terminal may extract the depth h6 through the reference distance End e corresponding to segment E.
  • the depth difference h7 between segment G and segment E may be extracted through the depth h5 obtained from the reference distance End g corresponding to segment G.
  • the mobile terminal may acquire the point targeted by the user's hand through the midpoint (x, y) and depth (z) corresponding to the effective end, and may provide video feedback on the display correspondingly.
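  • A short sketch of this midpoint-and-depth extraction (S701, S702) is given below, again using the hypothetical tree structure from the earlier sketches; using the centroid as the midpoint is an assumption about how the midpoint is computed.

    import numpy as np

    def extract_target_point(tree, end_id):
        seg = tree[end_id]
        ys, xs = np.nonzero(seg["mask"])            # pixels constituting the segment
        x, y = float(xs.mean()), float(ys.mean())   # midpoint (x, y) of the segment
        z = float(seg["dist"])                      # depth from the reference distance
        return x, y, z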
  • the video feedback provided in response to the targeting point will be described below.
  • FIGS. 28 to 30 are diagrams for explaining video feedback provided corresponding to the targeting point extracted through FIG. 15.
  • the video feedback may be video feedback indicating that a display area corresponding to the at least one target point is selected when the at least one target point moves more than a third preset distance toward the mobile terminal (or depth camera).
  • the video feedback may be video feedback indicating that a display area corresponding to the at least one target point is selected when the at least one target point moves to within a fourth preset distance of the mobile terminal (or depth camera).
  • FIG. 28 (a) is a diagram illustrating an embodiment in which a mobile terminal acquires a targeting point through a segment tree corresponding to a user's hand 410 and displays the target point 720 on the display 700.
  • FIG. 28 (b) is a diagram illustrating an embodiment in which the degree to which the targeting point approaches the display is displayed through the graphic feedback 740 approaching the selection target 730.
  • FIG. 28 (c) is a diagram illustrating graphic feedback 750 indicating that the selection target 730 of FIG. 28 (b) is selected. That is, the user can know that the selection target is selected because the graphic feedback 750 matches the contour of the selection target 730.
  • the video feedback may be video feedback that enlarges or zooms in the content displayed on the display.
  • the video feedback may be video feedback that reduces or zooms out the size of content displayed on the display when the target points are plural and the plurality of target points are combined into one target point.
  • FIG. 29 (a) shows a user's hand 410 action that provides zoom-in or zoom-out video feedback, in which two target points 720a and 720b are merged into one target point 721 or, conversely, one target point 721 is divided into the two target points 720a and 720b.
  • FIG. 29 (b) shows a user's hand 410 action that provides zoom-in or zoom-out video feedback, in which five target points 720a, 720b, 720c, 720d, and 720e are merged into one target point 721 or, conversely, one target point 721 is divided into the five target points 720a, 720b, 720c, 720d, and 720e.
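  • As a final illustration, the merge/divide behaviour of FIGS. 29 (a) and (b) can be reduced to a count comparison of tracked target points; the trigger condition below is an assumption, and the function name is hypothetical.

    def zoom_feedback(prev_points, cur_points):
        # Fewer points than before: several target points merged into one -> zoom out.
        if len(cur_points) < len(prev_points):
            return "zoom_out"
        # More points than before: one target point divided into several -> zoom in.
        if len(cur_points) > len(prev_points):
            return "zoom_in"
        return None  # no change in the number of target points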
  • the video feedback may be video feedback that moves content displayed on a display or displays other content in response to a direction in which the at least one target point moves.
  • FIG. 30 illustrates an action of the user's hand 410 corresponding to video feedback that displays other content, namely an embodiment in which five target points 720a, 720b, 720c, 720d, and 720e move from side to side.
  • the video feedback may be video feedback that rotates content displayed on a display corresponding to the rotation direction of the at least two target points.

Abstract

The object of the present invention is a time-of-flight (TOF) camera operating in a first or a second mode so as to reduce power consumption. To this end, a time-of-flight (TOF) camera according to the present invention is characterized in that it comprises: a light source configured to emit light directed at an object; an image sensor configured to receive the light reflected by the object and to obtain intensity data; an operation unit configured to process the intensity data; and a controller connected to the light source, the image sensor, and the operation unit. The controller controls the light source so as to emit a periodic single-pulse light source in the first mode, switches to the second mode when the operation unit detects a change in the intensity data, controls the light source so as to periodically change the phase of the single pulse in the second mode, and controls the operation unit so as to generate depth data relating to the object from the intensity data.

Priority Applications (5)

Application Number Priority Date Filing Date Title
PCT/KR2018/012562 WO2020085525A1 (fr) 2018-10-23 2018-10-23 Terminal mobile et son procédé de commande
KR1020190092154A KR102218919B1 (ko) 2018-10-23 2019-07-30 이동 단말기
KR1020190092155A KR20200045947A (ko) 2018-10-23 2019-07-30 이동 단말기
US16/661,215 US11500103B2 (en) 2018-10-23 2019-10-23 Mobile terminal
US16/661,199 US11620044B2 (en) 2018-10-23 2019-10-23 Mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2018/012562 WO2020085525A1 (fr) 2018-10-23 2018-10-23 Terminal mobile et son procédé de commande

Publications (1)

Publication Number Publication Date
WO2020085525A1 true WO2020085525A1 (fr) 2020-04-30

Family

ID=70331164

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/012562 WO2020085525A1 (fr) 2018-10-23 2018-10-23 Terminal mobile et son procédé de commande

Country Status (1)

Country Link
WO (1) WO2020085525A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11562582B2 (en) * 2018-12-03 2023-01-24 Ams International Ag Three dimensional imaging with intensity information

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130100524A (ko) * 2012-03-02 2013-09-11 삼성전자주식회사 3차원 이미지 센서의 구동 방법
KR20130138225A (ko) * 2010-09-28 2013-12-18 마이크로소프트 코포레이션 통합형 저전력 깊이 카메라 및 비디오 프로젝터 장치
US20170244922A1 (en) * 2012-11-28 2017-08-24 Infineon Technologies Ag Charge conservation in pixels
US20170272651A1 (en) * 2016-03-16 2017-09-21 Analog Devices, Inc. Reducing power consumption for time-of-flight depth imaging
US20170324891A1 (en) * 2012-11-21 2017-11-09 Infineon Technologies Ag Dynamic conservation of imaging power

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18938207

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18938207

Country of ref document: EP

Kind code of ref document: A1