WO2022019427A1 - Method and mobile device for controlling a screen of a mobile device based on the position of the flip-cover - Google Patents


Info

Publication number
WO2022019427A1
WO2022019427A1 (PCT/KR2021/002201)
Authority
WO
WIPO (PCT)
Prior art keywords
mobile device
data
flip
cover
probability
Prior art date
Application number
PCT/KR2021/002201
Other languages
English (en)
Inventor
Choice CHOUDHARY
Sunil Rathour
Ankit Agarwal
Harshit Oberoi
Desh Deepak AGARWAL
Aditya Singh
Original Assignee
Samsung Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd. filed Critical Samsung Electronics Co., Ltd.
Publication of WO2022019427A1

Classifications

    • H04M 1/724092: Interfacing with an external cover providing additional functionalities
    • G06F 1/1626: Constructional details or arrangements for portable computers with a single-body enclosure integrating a flat display, e.g. Personal Digital Assistants [PDAs]
    • G06F 1/1677: Detecting open or closed state or particular intermediate positions assumed by movable parts of the enclosure, e.g. detection of display lid position with respect to main body in a laptop
    • G06F 1/1684: Constructional details or arrangements related to integrated I/O peripherals
    • G06F 1/1686: Integrated I/O peripherals, the I/O peripheral being an integrated camera
    • G06F 1/1694: Integrated I/O peripherals, the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input
    • G06F 1/3215: Power management; monitoring of peripheral devices
    • G06F 1/3231: Power management; monitoring the presence, absence or movement of users
    • G06F 1/3265: Power saving in display device
    • G06F 21/316: User authentication by observing the pattern of computer usage, e.g. typical user behaviour
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06N 3/042: Knowledge-based neural networks; logical representations of neural networks
    • G06N 3/045: Combinations of networks
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • H04B 1/3888: Arrangements for carrying or protecting transceivers
    • H04M 1/0241: Using relative motion of the body parts to change the operational status of the telephone set, e.g. switching on/off, answering incoming call
    • G06F 2200/1633: Protecting arrangement for the entire housing of the computer
    • G06F 2200/1634: Integrated protective display lid, e.g. for touch-sensitive display in handheld computer
    • H04M 1/0214: Foldable telephones, i.e. with body parts pivoting to an open position around an axis parallel to the plane they define in closed position
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present disclosure relates to user interfaces, and more specifically to a method and mobile device for controlling a screen of the mobile device based on a position of the flip-cover connected to the mobile device.
  • a flip-cover (10) is usually available for purchase as an optional accessory for a mobile device (100) (Referring to FIG. 1).
  • the mobile device (100) usually contains a Hall sensor and usually has a magnet in the corresponding flip-cover (10).
  • the Hall sensor can sense a magnetic field from the magnet when the flip-cover (10) is closed (referring to notation "a" of FIG. 1) so that the mobile device (100) can automatically enter a suspend mode.
  • when the flip-cover (10) is opened (referring to notation "b" of FIG. 1), the Hall sensor can no longer sense the magnetic field, so the mobile device (100) may automatically exit the suspend mode.
  • the Hall sensor has extra costs and occupies extra space in the mobile device (100).
  • the principal object of the embodiments herein is to provide a method for controlling a screen of a mobile device based on a position of the flip-cover connected to the mobile device.
  • Another object of the embodiment herein is to obtain data from a plurality of sensors deployed in the mobile device.
  • Another object of the embodiment herein is to determine the position of the flip-cover based on the data received from the plurality of sensors deployed in the mobile device using a machine learning model, where the status of the position of the flip-cover is one of an open position and a close position.
  • Another object of the embodiment herein is to automatically switch ON the screen of the mobile device in response to determining that the flip-cover is in the open position.
  • Another object of the embodiment herein is to automatically switch OFF the screen of the mobile device in response to determining that the flip-cover is in the close position.
  • embodiments herein disclose a method for controlling a screen of a mobile device based on a position of the flip-cover connected to the mobile device.
  • the method includes obtaining, by the mobile device, data from a plurality of sensors deployed in the mobile device. Further, the method includes determining, by the mobile device, the position of the flip-cover based on the data received from the plurality of sensors deployed in the mobile device using a machine learning model, where the status of the position of the flip-cover is one of an open position and a close position. Further, the method includes automatically switching ON the screen of the mobile device in response to determining that the flip-cover is in the open position. Further, the method includes automatically switching OFF the screen of the mobile device in response to determining that the flip-cover is in the close position.
  • the method includes obtaining, by the mobile device, at least one of mutual hover data and mutual touch data from at least one first sensor from the plurality of sensors deployed in the mobile device. Further, the method includes obtaining, by the mobile device, at least one of magnetometer data, proximity of a surface of the flip-cover to the screen of the mobile device, and a lux variant from at least one second sensor from the plurality of sensors deployed in the mobile device. Further, the method includes applying, by the mobile device, the machine learning model on at least one of mutual hover data and mutual touch data to obtain a probability of the at least one of mutual hover data and mutual touch data.
  • the method includes applying, by the mobile device, the machine learning model on at least one of the magnetometer data, the proximity of the surface of the flip-cover to the screen of the mobile device, and the lux variant to obtain a probability of the at least one of the magnetometer data, the proximity of the surface of the flip-cover to the screen of the mobile device, and the lux variant. Further, the method includes combining, by the mobile device, the obtained probabilities based on the machine learning model. Further, the method includes detecting, by the mobile device, the position of the flip-cover based on the combined probability.
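The description later names a "weighted sum model" (160bc) but does not give its arithmetic. The following is a minimal illustrative sketch of one plausible combination step, not the patented implementation; the weights and the 0.5 decision threshold are hypothetical:

```python
def combine_probabilities(probs, weights):
    """Combine per-classifier probabilities as a normalized weighted sum.

    `probs` are the per-sensor-path probabilities (e.g. from the mutual
    data model and the sensor data model); `weights` are hypothetical.
    """
    assert len(probs) == len(weights)
    total = sum(weights)
    return sum(p * w for p, w in zip(probs, weights)) / total

def detect_position(combined, threshold=0.5):
    """Map the combined probability to a flip-cover position."""
    return "open" if combined >= threshold else "closed"
```

For example, a mutual-data probability of 0.9 weighted 0.6 and a sensor-data probability of 0.8 weighted 0.4 combine to 0.86, which maps to the open position under the assumed threshold.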
  • detecting, by the mobile device, the position of the flip-cover includes determining, by the mobile device, whether the mutual data does meet the mutual data probability and the magnetometer data does meet the magnetometer probability. Further, the method includes performing either determining whether the lux data does meet the lux data probability in response to determining that the mutual data does meet the mutual data probability and the magnetometer data does meet the magnetometer probability or detecting the flip-cover in the close position in response to determining that the mutual data does not meet the mutual data probability and the magnetometer data does not meet the magnetometer probability.
  • determining, by the mobile device, whether the lux data does meet the lux data probability in response to determining that the mutual data does meet the mutual data probability and the magnetometer data does meet the magnetometer probability includes performing either determining whether the proximity of the surface of the flip-cover is not zero in response to determining that the lux data does meet the lux data probability or detecting the flip-cover in the close position in response to determining that the lux data does not meet the lux data probability.
  • determining, by the mobile device, whether the proximity of the surface of the flip-cover is not zero in response to determining that the lux data does meet the lux data probability includes performing either detecting the flip-cover in the open position in response to determining that the proximity of the surface of the flip-cover is not zero or detecting the flip-cover in the close position in response to determining that the proximity of the surface of the flip-cover is zero.
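The three bullets above describe a decision cascade. The claims only state the cases where the first two checks both pass or both fail; the sketch below (hypothetical function and parameter names) treats any failed check as the close position:

```python
def flip_cover_position(mutual_ok, magnet_ok, lux_ok, proximity):
    """Decision cascade from the claims, as one reading of them.

    mutual_ok / magnet_ok / lux_ok: whether each data stream meets its
    probability criterion; proximity: flip-cover surface proximity value.
    """
    # First stage: mutual data and magnetometer data must both pass.
    if not (mutual_ok and magnet_ok):
        return "closed"
    # Second stage: lux data must pass.
    if not lux_ok:
        return "closed"
    # Final stage: non-zero proximity of the cover surface means open.
    return "open" if proximity != 0 else "closed"
```

Note the cascade declares the cover open only when every check passes, which matches the claims' requirement that the proximity test is reached only after the mutual, magnetometer, and lux tests succeed.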
  • the machine learning model is trained using data sets collected in different conditions comprising at least one of variable lighting condition with the mobile device in hand of a user of the mobile device and the flip-cover in the open position, variable lighting condition with the mobile device in hand of the user of the mobile device and the flip-cover in the close position, and variable position of the mobile device in variable lighting condition.
  • the plurality of sensors of the mobile device comprises a magnetometer sensor, a lux sensor, a proximity sensor, a gyroscope sensor, and a magnetic sensor.
  • the at least one of mutual hover data and mutual touch data is applied to a Convolutional Neural Network (CNN) classifier.
  • the at least one of the magnetometer data, the proximity of the surface of the flip-cover to the screen of the mobile device, and the lux variant is applied to a rule-based classifier.
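The description does not disclose the CNN architecture. As an illustration only, a toy forward pass over a capacitance-style grid (convolution, ReLU, global average, sigmoid) shows how mutual hover/touch data could be turned into a probability; every weight here is a placeholder, not a trained value:

```python
import math

def conv2d(x, k):
    """Valid 2-D convolution (no padding, stride 1) on nested lists."""
    h, w = len(x), len(x[0])
    kh, kw = len(k), len(k[0])
    return [[sum(x[i + a][j + b] * k[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

def cnn_probability(grid, kernel, weight, bias):
    """Toy CNN forward pass: conv -> ReLU -> global average -> sigmoid."""
    feat = [[max(v, 0.0) for v in row] for row in conv2d(grid, kernel)]
    n = len(feat) * len(feat[0])
    mean = sum(sum(row) for row in feat) / n
    return 1.0 / (1.0 + math.exp(-(mean * weight + bias)))
```

A grid of zeros (no hover/touch signal) yields a neutral 0.5 under zero bias, while a uniformly activated grid pushes the probability toward 1; a real classifier would learn the kernel, weight, and bias from labeled cover-open/cover-closed captures.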
  • the embodiments herein provide the mobile device for controlling the screen of the mobile device based on the position of the flip-cover connected to the mobile device.
  • the mobile device includes a screen controller with a processor and a memory.
  • the screen controller is configured to obtain data from the plurality of sensors deployed in the mobile device. Further, the screen controller is configured to determine the position of the flip-cover based on the data received from the plurality of sensors deployed in the mobile device using a machine learning model, where the status of the position of the flip-cover is one of the open position and close position. Further, the screen controller is configured to automatically switch ON the screen of the mobile device in response to determining that the flip-cover is in the open position. Further, the screen controller is configured to automatically switch OFF the screen of the mobile device in response to determining that the flip-cover is in the close position.
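The screen controller's behavior above can be sketched as a small control loop. The class and method names are hypothetical, and the position model is injected as a callable so any of the described models could back it:

```python
class ScreenController:
    """Illustrative sketch of the screen controller: obtain sensor data,
    determine the flip-cover position, and toggle the screen accordingly."""

    def __init__(self, model):
        self.model = model        # callable: sensor data -> "open" / "closed"
        self.screen_on = False

    def on_sensor_data(self, data):
        # Determine the flip-cover position, then switch the screen
        # ON for an open cover and OFF for a closed one.
        position = self.model(data)
        self.screen_on = (position == "open")
        return self.screen_on
```

For instance, with a stub model that reads only proximity, `ScreenController(lambda d: "open" if d["proximity"] != 0 else "closed")` switches the screen on for non-zero proximity readings and off otherwise.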
  • extra cost and space for the Hall sensor can be saved by using the existing hardware and sensors of the mobile device to predict flip-cover action.
  • FIG. 1 illustrates an existing method to detect a flip-cover position of a mobile device, according to the prior art;
  • FIG. 2a illustrates a block diagram of the mobile device for controlling the screen of the mobile device based on the position of the flip-cover connected to the mobile device, according to an embodiment as disclosed herein;
  • FIG. 2b illustrates a block diagram of a screen controller for controlling the screen of the mobile device based on the position of the flip-cover connected to the mobile device, according to an embodiment as disclosed herein;
  • FIG. 3 is a flow diagram illustrating various operations for controlling the screen of the mobile device based on the position of the flip-cover connected to the mobile device, according to an embodiment as disclosed herein;
  • FIG. 4 is a flow diagram illustrating various operations for detecting the position of the flip-cover, according to an embodiment as disclosed herein;
  • FIG. 5a is a flow diagram illustrating various operations for training machine learning model to determine the position of the flip-cover based on the data received from the plurality of sensors deployed in the mobile device, according to an embodiment as disclosed herein;
  • FIG. 5b is a flow diagram illustrating various operations for data preparation to determine the position of the flip-cover, according to an embodiment as disclosed herein;
  • FIG. 5c illustrates an example of data preprocessing to determine the position of the flip-cover, according to an embodiment as disclosed herein;
  • FIGS. 6a and 6b illustrate a mutual data model for detecting the position of the flip-cover, according to an embodiment as disclosed herein;
  • FIGS. 7a to 7c illustrate various operations for a sensor data model for detecting the position of the flip-cover, according to an embodiment as disclosed herein;
  • FIG. 8 is an example illustrating a secure lock using the flip-cover and determining magnetic field variation for various gestures of the user of the mobile device, according to an embodiment as disclosed herein.
  • circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like.
  • circuits constituting a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block.
  • Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the invention.
  • the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the invention.
  • embodiments herein disclose a method for controlling a screen of a mobile device based on a position of the flip-cover connected to the mobile device.
  • the method includes obtaining, by the mobile device, data from a plurality of sensors deployed in the mobile device. Further, the method includes determining, by the mobile device, the position of the flip-cover based on the data received from the plurality of sensors deployed in the mobile device using a machine learning model, where the status of the position of the flip-cover is one of the open position and the close position. Further, the method includes automatically switching ON the screen of the mobile device in response to determining that the flip-cover is in the open position. Further, the method includes automatically switching OFF the screen of the mobile device in response to determining that the flip-cover is in the close position.
  • Referring now to the drawings, and more particularly to FIGS. 2a through 8, there are shown preferred embodiments.
  • FIG. 2a illustrates a block diagram of a mobile device (100) for controlling a screen of the mobile device (100) based on the position of a flip-cover (10) connected to the mobile device (100), according to an embodiment as disclosed herein.
  • the mobile device (100) can be, for example, but not limited to, a smartphone, a smart tablet, or the like.
  • the mobile device (100) includes a memory (110), a processor (120), a communicator (130), a display (140) (i.e. a screen), a sensor (150), and a screen controller (160).
  • the memory (110) stores instructions to be executed by the processor (120).
  • the memory (110) may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.
  • the memory (110) may, in some examples, be considered a non-transitory storage medium.
  • the term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted that the memory (110) is non-movable.
  • the memory (110) can be configured to store larger amounts of information than a volatile memory.
  • a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache).
  • the memory (110) can be an internal storage unit or it can be an external storage unit of the mobile device (100), a cloud storage, or any other type of external storage.
  • the processor (120) communicates with the memory (110), the communicator (130), the display (140), the sensor (150), and the screen controller (160).
  • the processor (120) is configured to execute instructions stored in the memory (110) and to perform various processes.
  • the communicator (130) is configured for communicating internally between internal hardware components and with external devices via one or more networks.
  • the sensor (150) includes a plurality of sensors (150a) to (150n).
  • Examples of the sensor (150) include, but are not limited to, a magnetometer sensor, a lux sensor, a proximity sensor, a gyroscope sensor, a magnetic sensor, an audio sensor, a vibration sensor (e.g. vibration due to the walking of a user), a distance sensor, a gyro sensor, an indoor navigation sensor, a motion sensor, an infrared sensor, an ultrasonic sensor, an ambient light sensor, an ambient temperature sensor, an air humidity sensor, a fingerprint sensor, etc.
  • the screen controller (160) is configured to obtain data from the plurality of sensors (150) deployed in the mobile device (100). Further, the screen controller (160) is configured to determine the position of the flip-cover (10) based on the data received from the plurality of sensors (150) deployed in the mobile device (100) using a machine learning model (160b) (e.g. mutual data model (160ba), sensor data model (160bb), weighted sum model (160bc)), where the status of the position of the flip-cover (10) is one of the open position and close position. Further, the screen controller (160) is configured to automatically switch ON the screen of the mobile device (100) in response to determining that the flip-cover (10) is in the open position. Further, the screen controller (160) is configured to automatically switch OFF the screen of the mobile device (100) in response to determining that the flip-cover (10) is in the close position.
  • the screen controller (160) is configured to obtain at least one of mutual hover data and mutual touch data from at least one first sensor (150a) from the plurality of sensors (150) deployed in the mobile device (100). Further, the screen controller (160) is configured to obtain at least one of the magnetometer data, the proximity of a surface of the flip-cover (10) to the screen of the mobile device (100), the lux variant from at least one second sensor (150b) from the plurality of sensors (150) deployed in the mobile device (100). Further, the screen controller (160) is configured to apply the machine learning model (160b) on at least one of mutual hover data and mutual touch data to obtain the probability of the at least one of mutual hover data and mutual touch data.
  • the screen controller (160) is configured to apply the machine learning model (160b) on at least one of the magnetometer data, the proximity of the surface of the flip-cover (10) to the screen of the mobile device (100), the lux variant to obtain the probability of the at least one of magnetometer data, the proximity of the surface of the flip-cover (10) to the screen of the mobile device (100), the lux variant. Further, the screen controller (160) is configured to combine the obtained probability based on the machine learning model (160b). Further, the screen controller (160) is configured to detect the position of the flip-cover (10) based on the combined probability.
  • the screen controller (160) is configured to determine whether the mutual data does meet the mutual data probability and the magnetometer data does meet the magnetometer probability. Further, the screen controller (160) is configured to determine whether the lux data does meet the lux data probability in response to determining that the mutual data does meet the mutual data probability and the magnetometer data does meet the magnetometer probability. Further, the screen controller (160) is configured to detect the flip-cover (10) in the close position in response to determining that the mutual data does not meet the mutual data probability and the magnetometer data does not meet the magnetometer probability.
  • the screen controller (160) is configured to determine whether the proximity of the surface of the flip-cover (10) is not zero in response to determining that the lux data does meet the lux data probability. Further, the screen controller (160) is configured to detect the flip-cover (10) in the close position in response to determining that the lux data does not meet the lux data probability.
  • the screen controller (160) is configured to detect the flip-cover (10) in the open position in response to determining that the proximity of the surface of the flip-cover (10) is not zero. Further, the screen controller (160) is configured to detect the flip-cover (10) in the close position in response to determining that the proximity of the surface of the flip-cover (10) is zero.
  • the machine learning model (160b) is trained using data sets collected in different conditions comprising at least one of variable lighting condition (e.g. a good lighting condition, a medium lighting condition, a low lighting condition) with the mobile device (100) in hand of a user of the mobile device (100) and the flip-cover (10) in the open position, variable lighting condition with the mobile device (100) in hand of the user of the mobile device (100) and the flip-cover (10) in the close position, and variable position (e.g. face-up, face-down, in a pocket, in a purse, while playing games) of the mobile device (100) in variable lighting condition and variable flip-cover (10) condition (e.g. on the back side of the mobile device (100), on the front side of the mobile device (100)).
  • the at least one of mutual hover data and mutual touch data applied to a Convolutional Neural Network (CNN) classifier.
  • CNN Convolutional Neural Network
  • the at least one of the magnetometer data, the proximity of the surface of the flip-cover (10) to the screen of the mobile device (100), and the lux variant is applied to a rule-based classifier.
  • At least one of the plurality of modules may be implemented through an AI model.
  • a function associated with AI may be performed through the non-volatile memory, the volatile memory, and the processor (120).
  • the processor (120) may include one or a plurality of processors.
  • one or a plurality of processors may be a general purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an AI-dedicated processor such as a neural processing unit (NPU).
  • CPU central processing unit
  • AP application processor
  • GPU graphics processing unit
  • VPU visual processing unit
  • NPU neural processing unit
  • the one or a plurality of processors control the processing of the input data in accordance with a predefined operating rule or artificial intelligence (AI) model stored in the non-volatile memory and the volatile memory.
  • the predefined operating rule or artificial intelligence model is provided through training or learning.
  • the learning may be performed in a device itself in which AI according to an embodiment is performed, and/or may be implemented through a separate server/system.
  • the AI model may consist of a plurality of neural network layers. Each layer has a plurality of weight values, and performs a layer operation through calculation on the output of a previous layer and the plurality of weights.
  • neural networks include, but are not limited to, convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), restricted Boltzmann Machine (RBM), deep belief network (DBN), bidirectional recurrent deep neural network (BRDNN), generative adversarial networks (GAN), and deep Q-networks.
  • CNN convolutional neural network
  • DNN deep neural network
  • RNN recurrent neural network
  • RBM restricted Boltzmann Machine
  • DBN deep belief network
  • BRDNN bidirectional recurrent deep neural network
  • GAN generative adversarial networks
  • the learning algorithm is a method for training a predetermined target device (for example, a robot) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction.
  • Examples of learning algorithms include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
  • FIG. 2a shows various hardware components of the mobile device (100) but it is to be understood that other embodiments are not limited thereon.
  • the mobile device (100) may include fewer or more components.
  • the labels or names of the components are used only for illustrative purposes and do not limit the scope of the invention.
  • One or more components can be combined together to perform same or substantially similar function to control the screen of the mobile device (100) based on the position of the flip-cover (10) connected to the mobile device (100).
  • FIG. 2b illustrates a block diagram of the screen controller (160) for controlling the screen of the mobile device (100) based on the position of the flip-cover (10) connected to the mobile device (100), according to an embodiment as disclosed herein.
  • the screen controller (160) includes a flip-cover position detector (160a), and the ML model (160b).
  • the flip-cover position detector (160a) determines the position of the flip-cover (10) based on the data received from the plurality of sensors (150) deployed in the mobile device (100) using the machine learning model (160b), where the status of the position of the flip-cover (10) is one of the open position and the close position. Further, the flip-cover position detector (160a) automatically switches ON the screen of the mobile device (100) in response to determining that the flip-cover (10) is in the open position. Further, the flip-cover position detector (160a) automatically switches OFF the screen of the mobile device (100) in response to determining that the flip-cover (10) is in the close position.
  • the ML model (160b) obtains a probability of the at least one of mutual hover data and mutual touch data by using a Convolutional Neural Network (CNN) classifier, and obtains a probability of the at least one of the magnetometer data, the proximity of the surface of the flip-cover (10) to the screen of the mobile device (100), and the lux variant by using a rule-based classifier. Further, the ML model (160b) combines the obtained probabilities and detects the position of the flip-cover (10) based on the combined probability.
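The two-branch fusion described above can be sketched as follows. The patent does not specify how the CNN-branch and rule-branch probabilities are combined, so the equal weighting, the function names, and the 0.5 decision threshold below are illustrative assumptions.

```python
def combine_probabilities(p_cnn, p_rule, w_cnn=0.5, w_rule=0.5):
    """Combine the CNN-branch (mutual data) and rule-branch (sensor data)
    probabilities of the cover being open; the weights are assumed."""
    return w_cnn * p_cnn + w_rule * p_rule


def detect_position(p_cnn, p_rule, threshold=0.5):
    """Map the combined probability to an open/close decision."""
    return "open" if combine_probabilities(p_cnn, p_rule) >= threshold else "close"
```

For example, with p_cnn = 0.9 and p_rule = 0.8 the combined probability is 0.85 and the cover is reported open.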
  • CNN Convolutional Neural Network
  • the flip-cover position detector (160a) determines whether the mutual data meets the mutual data probability and the magnetometer data meets the magnetometer probability. Further, the flip-cover position detector (160a) determines whether the lux data meets the lux data probability in response to determining that the mutual data meets the mutual data probability and the magnetometer data meets the magnetometer probability. Further, the flip-cover position detector (160a) detects the flip-cover (10) in the close position in response to determining that the mutual data does not meet the mutual data probability and the magnetometer data does not meet the magnetometer probability.
  • the flip-cover position detector (160a) determines whether the proximity of the surface of the flip-cover (10) is not zero in response to determining that the lux data meets the lux data probability. Further, the flip-cover position detector (160a) detects the flip-cover (10) in the close position in response to determining that the lux data does not meet the lux data probability.
  • the flip-cover position detector (160a) detects the flip-cover (10) in the open position in response to determining that the proximity of the surface of the flip-cover (10) is not zero. Further, the flip-cover position detector (160a) detects the flip-cover (10) in the close position in response to determining that the proximity of the surface of the flip-cover (10) is zero.
  • FIG. 2b shows various hardware components of the screen controller (160) but it is to be understood that other embodiments are not limited thereon.
  • the screen controller (160) may include fewer or more components.
  • the labels or names of the components are used only for illustrative purposes and do not limit the scope of the invention.
  • One or more components can be combined together to perform same or substantially similar function to control the screen of the mobile device (100) based on the position of the flip-cover (10) connected to the mobile device (100).
  • FIG. 3 is a flow diagram (300) illustrating various operations for controlling the screen of the mobile device (100) based on the position of the flip-cover (10) connected to the mobile device (100), according to an embodiment as disclosed herein.
  • the operations (302-310) are performed by the mobile device (100).
  • the method includes obtaining data from the plurality of sensors (150) deployed in the mobile device (100).
  • the method includes determining the position of the flip-cover (10) based on the data received from the plurality of sensors (150) deployed in the mobile device (100) using the ML model (160b), where the status of the position of the flip-cover (10) is one of the open position and close position.
  • the method includes automatically switching ON the screen of the mobile device (100) in response to determining that the flip-cover (10) is in the open position.
  • the method includes automatically switching OFF the screen of the mobile device (100) in response to determining that the flip-cover (10) is in the close position.
  • FIG. 4 is a flow diagram (400) illustrating various operations for detecting the position of the flip-cover (10), according to an embodiment as disclosed herein.
  • the operations (402-416) are performed by the mobile device (100).
  • the method includes determining that the mobile device (100) is in an idle state.
  • the method includes detecting whether the flip-cover (10) current state is known (e.g. close, open, back-side).
  • the method includes determining whether the mutual data meets the mutual data probability (i.e. mutual data threshold) and the magnetometer data meets the magnetometer probability (i.e. magnetometer threshold).
  • the method includes determining whether the lux data meets the lux data probability (i.e. lux data threshold) in response to determining that the mutual data meets the mutual data probability and the magnetometer data meets the magnetometer probability.
  • the method includes determining whether the proximity of the surface of the flip-cover (10) is not zero in response to determining that the lux data meets the lux data probability.
  • the method includes detecting the flip-cover (10) in the open position in response to determining that the proximity of the surface of the flip-cover (10) is not zero.
  • the method includes at least one of detecting the flip-cover (10) in the close position in response to determining that the mutual data does not meet the mutual data probability and the magnetometer data does not meet the magnetometer probability, detecting the flip-cover (10) in the close position in response to determining that the lux data does not meet the lux data probability, and detecting the flip-cover (10) in the close position in response to determining that the proximity of the surface of the flip-cover (10) is zero.
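The cascade of operations in FIG. 4 above can be sketched as a single decision function. This is a sketch only: the boolean inputs stand for the per-sensor threshold checks described above, and the function name is an assumption.

```python
def flip_cover_state(mutual_ok: bool, magnet_ok: bool, lux_ok: bool,
                     proximity: float) -> str:
    """Cascade from FIG. 4: every stage must pass for the cover to be open."""
    if not (mutual_ok and magnet_ok):
        return "close"  # mutual or magnetometer threshold not met
    if not lux_ok:
        return "close"  # lux threshold not met
    # Final stage: a non-zero proximity reading means the cover is open.
    return "open" if proximity != 0 else "close"
```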
  • FIG. 5a is a flow diagram (500) illustrating various operations for training ML model (160b) to determine the position of the flip-cover (10) based on the data received from the plurality of sensors (150) deployed in the mobile device (100), according to an embodiment as disclosed herein.
  • the method includes obtaining data from the plurality of sensors (150) deployed in the mobile device (100); the detailed description for step 502 is given in FIG. 5b.
  • the method includes data preprocessing (i.e. normalization) on the data obtained from the plurality of sensors (150) deployed in the mobile device (100); the detailed description for step 504 is given in FIG. 5c.
  • the method includes obtaining at least one of mutual hover data and mutual touch data from at least one first sensor from the plurality of sensors (150) deployed in the mobile device (100).
  • the method includes obtaining at least one of the magnetometer data, the proximity of the surface of the flip-cover (10) to the screen of the mobile device (100), the lux variant from at least one second sensor from the plurality of sensors (150) deployed in the mobile device (100).
  • the method includes applying the machine learning model (i.e. CNN classifier) on at least one of mutual hover data and mutual touch data to obtain the probability of the at least one of mutual hover data and mutual touch data.
  • the method includes applying the machine learning model on at least one of the magnetometer data, the proximity of the surface of the flip-cover (10) to the screen of the mobile device (100), the lux variant to obtain the probability of the at least one of magnetometer data, the proximity of the surface of the flip-cover (10) to the screen of the mobile device (100), the lux variant.
  • the method includes combining the obtained probabilities based on the ML model (160b).
  • the method includes detecting the position of the flip-cover (10) based on the combined probability.
  • FIG. 5b is a flow diagram (502) illustrating various operations for data preparation to determine the position of the flip-cover (10), according to an embodiment as disclosed herein.
  • the method includes creating a dataset using a conventional Hall sensor to collect true data set points while recording values of the magnetometer, lux, proximity sensor, azimuth, pitch, roll, mutual data index (hover), and mutual data index (touch) to train the ML model (160b).
  • the method includes collecting the data set for a different type of flip-cover (10) (e.g. leather, plastic, silicone, clear mirror, plating mirror) of the mobile device (100).
  • the method includes performing different scenarios for collecting data sets.
  • the method includes storing the collected data set in the memory (110).
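The data-preparation steps above might yield labeled rows shaped like the following. This is a sketch: the field names and feature ordering are assumptions; only the recorded sensor list and the Hall-sensor ground truth come from the description.

```python
def make_training_row(magnetometer, lux, proximity, azimuth, pitch, roll,
                      mutual_hover, mutual_touch, hall_cover_closed):
    """Bundle one recorded sample; the conventional Hall sensor reading
    supplies the ground-truth label (1 = close position, 0 = open position)."""
    return {
        "features": [magnetometer, lux, proximity, azimuth, pitch, roll,
                     mutual_hover, mutual_touch],
        "label": 1 if hall_cover_closed else 0,
    }
```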
  • FIG. 5c illustrates an example of data preprocessing to determine the position of the flip-cover (10), according to an embodiment as disclosed herein.
  • the screen controller (160) obtains data from the plurality of sensors (150) deployed in the mobile device (100). Every column represents different values of the plurality of sensors (150) deployed in the mobile device (100). For example, magnetometer values, lux values, proximity values, azimuth values, pitch values, roll values, the different scenarios for collecting data sets, an angle of flip-cover (10), a flip-cover (10) action, and a mutual data index.
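The normalization mentioned in step 504 could, for instance, be a per-column min-max scaling of these sensor columns. The patent only says "Normalization", so this particular scheme is an assumption.

```python
def min_max_normalize(column):
    """Scale one sensor column to [0, 1]; constant columns map to 0.0."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]
    return [(v - lo) / (hi - lo) for v in column]
```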
  • FIG. 6a-6b illustrates the mutual data model (160ba) for detecting the position of the flip-cover (10), according to an embodiment as disclosed herein.
  • the notation “a” indicates the mutual data index (touch) when the flip-cover (10) is in the open position.
  • A normal human (i.e. the user of the mobile device (100)) touch is sensed, and it is considered that the mobile device (100) is being used and hence the flip-cover (10) is in the open position.
  • Mutual data touch depends on the resistance of the conductive object that comes in contact with the screen of the mobile device (100).
  • the notation “b” indicates the mutual data index (touch) when the flip-cover (10) is in the close position. The whole screen of the mobile device (100) gets activated and produces much larger values, which are thus used to determine that the flip-cover (10) is in the close position.
  • the notation “c” indicates the mutual data index (hover) when the flip-cover (10) is in the open position. Based on the hover data of the mobile device (100), the flip-cover (10) is detected in the open position or in the close position.
  • the notation “d” indicates the mutual data index (hover) when the flip-cover (10) is in the close position.
  • FIG. 7a-7c illustrates various operations for the sensor data model (160bb) for detecting the position of the flip-cover (10), according to an embodiment as disclosed herein.
  • the method includes obtaining data from the plurality of sensors (150) deployed in the mobile device (100).
  • the method includes calculating an impurity measure to find the best sensor-value feature that gives the maximum split of the dataset by any rule, i.e. that creates the maximum entropy change. Based on the maximum entropy change value, a value is obtained which properly segregates the data into different classes.
  • the method includes making a rule to split the dataset.
  • the method includes determining a depth of tree.
  • the method includes making 100 such classifiers when the depth of the tree is more than six.
  • the method includes extracting the structure of 50 trees in in-order format into a file (i.e. extracting the tree format into a text file, which makes computation faster).
  • the method includes accessing a file in real-time to apply rules.
  • the method includes predicting the probability of the at least one sensor based on majority voting of the 100 trees.
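The majority-voting step can be read as taking the fraction of trees that vote for one class as the rule-based branch probability. This interpretation is not spelled out in the text, so the sketch below is an assumption.

```python
def forest_probability(tree_votes):
    """Fraction of trees voting 1; used as the rule-based branch probability.
    `tree_votes` holds one 0/1 vote per tree in the ensemble."""
    return sum(tree_votes) / len(tree_votes)
```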
  • FIG. 7b-7c: taking all values of every column in ascending order (referring to Table 2) and taking the midpoint of every pair of consecutive values as a split point. So in the Lux column, the possible split values are 0.03 and 0.06. Here, the total dataset size is 6. The dataset is split using the Lux column value,
  • Imp = a² + b² (where “a” represents the number of 0s and “b” represents the number of 1s).
  • the split with the maximum score is chosen as the splitting rule at a node. The same process is repeated until either the height of the tree is greater than or equal to the maximum depth value (i.e. user input) or a perfect split score of 1.0 is achieved.
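A minimal sketch of the split-point selection described above, using the midpoint candidates and the Imp = a² + b² score normalized per side so that a perfect split scores 1.0. The per-side normalization and the size weighting are inferred from the "perfect split score is 1.0" remark, not stated explicitly.

```python
def node_purity(labels):
    """Imp on one side, normalized: (a/n)^2 + (b/n)^2 with a zeros, b ones."""
    n = len(labels)
    if n == 0:
        return 0.0
    a = labels.count(0)
    b = labels.count(1)
    return (a / n) ** 2 + (b / n) ** 2


def split_score(values, labels, threshold):
    """Size-weighted purity of both sides of a candidate split; 1.0 is perfect."""
    left = [y for x, y in zip(values, labels) if x <= threshold]
    right = [y for x, y in zip(values, labels) if x > threshold]
    n = len(labels)
    return len(left) / n * node_purity(left) + len(right) / n * node_purity(right)


def candidate_thresholds(values):
    """Midpoints of every pair of consecutive sorted values, as described above."""
    s = sorted(set(values))
    return [(s[i] + s[i + 1]) / 2 for i in range(len(s) - 1)]
```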
  • FIG. 8 is an example illustrating a secure lock using the flip-cover (10) and determining the magnetic field variation for various gestures of the user of the mobile device (100), according to an embodiment as disclosed herein.
  • the notation "a" indicates a secure lock using the flip-cover (10).
  • Movable magnet e.g. moving object
  • Movement of the magnet or metal produces a different kind of magnetic field on a different area over the display (140) of the mobile device (100).
  • the magnetic sensor generates different magnitude values based on a distance between the magnetic sensor and the flip-cover (10) of the mobile device (100).
  • the different magnitude values are stored in a two-dimensional (2D) array form, which can be compared with the actual pattern to support a flip-cover (10) based secure lock pattern.
  • use of the different magnitude values can be extended to different applications, for example, but not limited to, an air signature application, or the like.
  • a pattern of the magnetic field sensed values (X variations, Y variations, Z variations) is recorded and mapped to the ML- model (160b) for authentication of the user of the mobile device (100).
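The 2D-array comparison behind the secure-lock pattern might look like the following. This is a sketch: the grid shape, the cell-by-cell comparison, and the tolerance parameter are assumptions not given in the text.

```python
def pattern_matches(recorded, observed, tolerance=5.0):
    """Compare the stored 2D magnitude grid with a freshly sensed one,
    cell by cell, within an assumed per-cell tolerance."""
    if len(recorded) != len(observed):
        return False
    for row_r, row_o in zip(recorded, observed):
        if len(row_r) != len(row_o):
            return False
        for r, o in zip(row_r, row_o):
            if abs(r - o) > tolerance:
                return False
    return True
```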
  • the notations "b1" and "b2" indicate determining the magnetic field variation for various gestures of the user of the mobile device (100). For example, performing a gesture from the left side to the right side.
  • the magnetic field variation is determined and a call application is performed in response to determining that a gesture is performed from the left side to the right side.
  • the magnetic field variation can be extended to different applications, for example, but not limited to, a gallery application, a camera application, a message application, a social media application, etc.
  • a moving object e.g. various gesture of user of the mobile device (100)
  • the magnetic field varies based on the position (e.g. nearby, far away) of the moving object. Further, a variety of operations are performed on the basis of the magnetic field variation. Further, the same example can be applied to other devices, such as Internet of things (IOT) devices.
  • IOT Internet of things
  • the embodiments disclosed herein can be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.


Abstract

Embodiments herein provide a method for controlling a screen of a mobile device based on a position of the flip-cover connected to the mobile device. The method includes obtaining, by the mobile device, data from a plurality of sensors deployed in the mobile device, and determining, by the mobile device, the position of the flip-cover based on the data received from the plurality of sensors deployed in the mobile device using a machine learning model, where the status of the position of the flip-cover is one of an open position and a close position.
PCT/KR2021/002201 2020-07-22 2021-02-22 Method and mobile device for controlling a screen of a mobile device based on the flip-cover position WO2022019427A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202041031319 2020-07-22
IN202041031319 2020-07-22

Publications (1)

Publication Number Publication Date
WO2022019427A1 true WO2022019427A1 (fr) 2022-01-27

Family

ID=79729207

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/002201 WO2022019427A1 (fr) 2020-07-22 2021-02-22 Method and mobile device for controlling a screen of a mobile device based on the flip-cover position

Country Status (1)

Country Link
WO (1) WO2022019427A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011182184A (ja) * 2010-03-01 2011-09-15 Nec Corp Mobile terminal device and display control method in the mobile terminal device
WO2012036710A1 (fr) * 2010-09-17 2012-03-22 Apple Inc. Device having a foldable cover and user interface for the device
US20150103022A1 (en) * 2013-10-15 2015-04-16 Lg Electronics Inc. Terminal and operating method thereof
US20150133183A1 (en) * 2013-11-13 2015-05-14 Motorola Mobility Llc Method, Electronic Device, and Accessory for Carrying Out Functions Based on Reflected Electromagnetic Radiation
US20160255256A1 (en) * 2013-05-07 2016-09-01 Lg Electronics Inc. Terminal case, mobile terminal, and mobile terminal assembly including the terminal case and the mobile terminal



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21845210

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21845210

Country of ref document: EP

Kind code of ref document: A1