WO2022019436A1 - Mobile device operation during screen block event of mobile device and method thereof - Google Patents

Mobile device operation during screen block event of mobile device and method thereof

Info

Publication number
WO2022019436A1
Authority
WO
WIPO (PCT)
Prior art keywords
mobile device
input
application
sensor
screen block
Prior art date
Application number
PCT/KR2021/003990
Other languages
French (fr)
Inventor
Choice CHOUDHARY
Sunil Rathour
Ankit Agarwal
Sobita CHOUDHARY
Mrinal MALIK
Harshit Oberoi
Vanshaj BEHL
Aditya Singh
Original Assignee
Samsung Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd. filed Critical Samsung Electronics Co., Ltd.
Publication of WO2022019436A1 publication Critical patent/WO2022019436A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1694Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1626Constructional details or arrangements for portable computers with a single-body enclosure integrating a flat display, e.g. Personal Digital Assistants [PDAs]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1662Details related to the integrated keyboard
    • G06F1/1671Special purpose buttons or auxiliary keyboards, e.g. retractable mini keypads, keypads or buttons that remain accessible at closed laptop
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/169Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated pointing device, e.g. trackball in the palm rest area, mini-joystick integrated between keyboard keys, touch pads or touch stripes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition
    • G06N5/025Extracting rules from data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2200/00Indexing scheme relating to G06F1/04 - G06F1/32
    • G06F2200/16Indexing scheme relating to G06F1/16 - G06F1/18
    • G06F2200/163Indexing scheme relating to constructional details of the computer
    • G06F2200/1637Sensing arrangement for detection of housing movement or orientation, e.g. for controlling scrolling or cursor movement on the display of an handheld computer

Definitions

  • the present disclosure relates to a mobile device, and more specifically, to a method for operating the mobile device during a screen block event associated with a touch screen of the mobile device.
  • a mobile device with a touch screen cannot be used under water due to touch screen limitations in an underwater environment.
  • Hardware buttons are available in the mobile device to operate in the underwater environment, but the hardware buttons provide restricted functionality due to limitations of mechanical parts. Further, if a screen block event occurs on the touch screen of the mobile device due to water droplets on the touch screen, damage to a portion of the touch screen, or hanging of the mobile device, the user of the mobile device faces issues in using the touch screen. This reduces the user experience in an underwater environment.
  • the principal object of the embodiments herein is to provide a method for operating a mobile device during a screen block event associated with a touch screen of the mobile device without requiring periodic calibration in the mobile device. More specifically, the proposed method allows access to the mobile device when the touch screen is not responding or is blocked/hanging due to spilled water or damage, or when a portion of the touch screen is submerged under water. Further, a combination of inbuilt sensors, apart from the sensor associated with the touch screen, is enabled to allow the user to operate the mobile device.
  • Another object of the embodiments herein is to automatically configure the mobile device in a screen block mode, when the screen block event is detected, so as to configure one or more sensors placed at one of a front panel and a back panel of the mobile device to enable operation of the mobile device in the screen block mode during the screen block event.
  • Another object of the embodiments herein is to identify one or more applications on which an action is to be performed based on user input, without using any external hardware.
  • Another object of the embodiments herein is to operate the mobile device during the screen block event associated with the touch screen of the mobile device by using application prioritization, requiring less user effort to perform an action during the screen block event, and placing an intuitive indicator at the most preferred functionality to save user effort, time, and power.
  • a method for operating a mobile device during a screen block event associated with a touch screen of the mobile device includes detecting, by the mobile device, the screen block event associated with the touch screen of the mobile device. Further, the method includes automatically configuring the mobile device in a screen block mode using a machine learning model, when the event is detected. The screen block mode configures at least one sensor associated with the mobile device to enable operation of the mobile device during the screen block event. Further, the method includes receiving, by the mobile device, a first input on the at least one configured sensor associated with the mobile device when the event is detected. Further, the method includes identifying, by the mobile device, at least one application on which an action is to be performed based on the first input. Further, the method includes receiving, by the mobile device, a second input on the at least one configured sensor. Further, the method includes performing, by the mobile device, the action on the at least one identified application in the mobile device based on the second input.
  • the action is one of launching the at least one identified application in the mobile device based on the second input and activating the at least one identified application in the mobile device based on the second input.
  • the screen block mode configures the sensor placed at one of a front panel and a back panel of the mobile device to enable operation of the mobile device during the screen block event.
  • configuring, by the mobile device, the at least one configured sensor associated with the mobile device to enable operation of the mobile device during the screen block event includes determining, by the mobile device, at least one of an orientation of the mobile device, determining, by the mobile device, whether the orientation of the mobile device indicates one of a landscape mode and a portrait mode, and performing, by the mobile device, at least one of: configuring the at least one sensor placed at the front panel of the mobile device based on at least one of the orientation of the mobile device in response to detecting the orientation of the mobile device indicates the landscape mode, and configuring the at least one sensor placed at the back panel of the mobile device based on at least one of the orientation of the mobile device in response to detecting the orientation of the mobile device indicates the portrait mode.
  • identifying, by the mobile device, the at least one application on which the action is to be performed based on the first input includes determining, by the mobile device, motion data from the first input performed on the at least one sensor, mapping, by the mobile device, the motion data to at least one portion of the touch screen of the mobile device, and displaying, by the mobile device, an indicator indicating the at least one application on the touch screen of the mobile device based on the mapping.
  • the motion data includes at least one of a direction of the first input performed on the at least one sensor, a distance covered by the first input performed on the at least one sensor, an orientation of the mobile device, a rate of motion of the first input performed on the at least one sensor, and a time unit of the first input performed on the at least one configured sensor.
  • the indicator dynamically moves to navigate across the at least one application of the mobile device based on the motion data extracted from the first input performed on the at least one sensor.
  • the at least one application on the touch screen of the mobile device is determined based on at least one of a frequently used application, a preconfigured application, a recently used application, a location of the user, and a type of the screen block event.
  • the screen block event is caused by at least one of water droplets on the touch screen of the mobile device, damage to at least one portion of the touch screen of the mobile device, hanging of the mobile device, and a portion of the touch screen of the mobile device being submerged under water.
  • a mobile device for handling a screen block event associated with a touch screen of the mobile device.
  • the mobile device includes a screen block event controller coupled with a memory and a processor.
  • the screen block event controller is configured to detect the screen block event associated with the touch screen of the mobile device. Further, the screen block event controller is configured to automatically configure the mobile device in a screen block mode using a machine learning model, when the event is detected.
  • the screen block mode configures at least one sensor associated with the mobile device to enable operation of the mobile device during the screen block event. Further, the screen block event controller is configured to receive a first input on the at least one configured sensor associated with the mobile device when the event is detected.
  • the screen block event controller is configured to identify at least one application on which an action is to be performed based on the first input. Further, the screen block event controller is configured to receive a second input on the at least one configured sensor. Further, the screen block event controller is configured to perform the action on the at least one identified application in the mobile device based on the second input.
  • the principal object of the embodiments herein is to provide a method for operating a mobile device during a screen block event associated with a touch screen of the mobile device without requiring periodic calibration in the mobile device. More specifically, the proposed method allows access to the mobile device when the touch screen is not responding or is blocked/hanging due to spilled water or damage, or when a portion of the touch screen is submerged under water. Further, a combination of inbuilt sensors, apart from the sensor associated with the touch screen, is enabled to allow the user to operate the mobile device.
  • Another object of the embodiments herein is to automatically configure the mobile device in a screen block mode, when the screen block event is detected, so as to configure one or more sensors placed at one of a front panel and a back panel of the mobile device to enable operation of the mobile device in the screen block mode during the screen block event.
  • Another object of the embodiments herein is to identify one or more applications on which an action is to be performed based on user input, without using any external hardware.
  • Another object of the embodiments herein is to operate the mobile device during the screen block event associated with the touch screen of the mobile device by using application prioritization, requiring less user effort to perform an action during the screen block event, and placing an intuitive indicator at the most preferred functionality to save user effort, time, and power.
  • FIG. 1 illustrates various hardware components of a mobile device, according to an embodiment as disclosed herein.
  • FIG. 2a and FIG. 2b are a flow diagram illustrating a method for operating the mobile device during a screen block event associated with a touch screen of the mobile device, according to an embodiment as disclosed herein.
  • FIG. 3a is an example scenario in which various sensors placed in a front panel of the mobile device are depicted, according to an embodiment as disclosed herein.
  • FIG. 3b is an example scenario in which various sensors placed in a back panel of the mobile device are depicted, according to an embodiment as disclosed herein.
  • FIGS. 4a-4e illustrate an example scenario in which the mobile phone is operated under water to capture a photo, according to an embodiment as disclosed herein.
  • FIGS. 5a-5f illustrate an example scenario in which the mobile phone is operated under water to perform a video call, according to an embodiment as disclosed herein.
  • FIGS. 6a-6f illustrate an example scenario in which the mobile phone is operated under an oil environment to perform a phone call, according to an embodiment as disclosed herein.
  • FIG. 7 illustrates an example scenario in which the mobile device is operated in a landscape mode, according to an embodiment as disclosed herein.
  • FIG. 8 illustrates an example scenario in which the mobile phone is operated under a chemical refinery environment to perform a video call, according to an embodiment as disclosed herein.
  • FIG. 9 illustrates an example scenario in which the mobile phone is operated under the chemical refinery environment to capture the photo, according to an embodiment as disclosed herein.
  • FIGS. 10a-10c illustrate an example scenario in which the mobile device is operated while the mobile device is hanging, according to an embodiment as disclosed herein.
  • FIGS. 11a-11b illustrate an example scenario in which the mobile device is operated while the touch screen of the mobile device is damaged, according to an embodiment as disclosed herein.
  • circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like.
  • circuits constituting a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block.
  • Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the invention.
  • the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the invention
  • the method includes detecting, by the mobile device, the screen block event associated with the touch screen of the mobile device. Further, the method includes automatically configuring the mobile device in a screen block mode using a machine learning model, when the event is detected.
  • the screen block mode configures at least one sensor associated with the mobile device to enable operation of the mobile device during the screen block event. Further, the method includes receiving, by the mobile device, a first input on the at least one configured sensor associated with the mobile device when the event is detected. Further, the method includes identifying, by the mobile device, at least one application on which an action is to be performed based on the first input. Further, the method includes receiving, by the mobile device, a second input on the at least one configured sensor. Further, the method includes performing, by the mobile device, the action on the at least one identified application in the mobile device based on the second input.
  • the method can be used to operate the mobile device inside water without using any special hardware and without requiring periodic calibration in the mobile device. More specifically, the proposed invention allows access to the mobile device when the touch screen of the mobile device is not responding or is blocked/hanging due to spilling of water or damage. Further, the combination of inbuilt sensors apart from a sensor associated with the touch screen is enabled to allow the user to operate the mobile device. The use of the sensors generally provides effective results, as the sensor readings do not vary based on depth or the type of liquid, such as water, oil, or the like.
  • Referring now to FIGS. 1 through 11b, there are shown preferred embodiments.
  • FIG. 1 illustrates various hardware components of a mobile device (100), according to an embodiment as disclosed herein.
  • the mobile device (100) can be, for example, but not limited to a cellular phone, a smart phone, a Personal Digital Assistant (PDA), a tablet computer, a laptop computer, an Internet of Things (IoT), a virtual reality device, an immersive device, and a smart watch.
  • the mobile device (100) includes a processor (102), a communicator (104), a memory (106), a touch screen (108), a front panel (110), a back panel (112), a plurality of sensors (114a-114h), an edge panel (116), and a screen block event controller (118).
  • the sensors (114a, 114b, and 114e) are placed at the front panel (110) as shown in FIG. 3a.
  • the sensors (114c, 114d, and 114f-114h) are placed at the back panel (112) of the mobile device.
  • the processor (102) is coupled with the communicator (104), the memory (106), the touch screen (108), the front panel (110), the back panel (112), the sensor (114a-114h), the application, the edge panel (116), and the screen block event controller (118).
  • the screen block event controller (118) is configured to detect a screen block event associated with the touch screen (108) of the mobile device (100).
  • the screen block event is caused by water droplets on the touch screen (108) of the mobile device (100), damage to a portion of the touch screen (108) of the mobile device (100), hanging of the mobile device (100) due to the processing capabilities of the mobile device (100), or a portion of the touch screen (108) of the mobile device (100) being submerged under water.
  • Based on detecting the screen block event associated with the touch screen (108) of the mobile device (100), the screen block event controller (118) automatically configures the mobile device (100) in a screen block mode using a machine learning model.
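For illustration only, the following Python sketch shows one way such detection and automatic mode switching could be arranged; the signal names, thresholds, and the mode-switch hook are assumptions and are not taken from the disclosure.

```python
# Hypothetical sketch only: detecting a screen block event and entering the screen block mode.
from dataclasses import dataclass

@dataclass
class TouchScreenStatus:
    responsive: bool             # touch controller still reporting valid events
    moisture_detected: bool      # e.g., a water/moisture flag from the capacitive layer
    damaged_region_ratio: float  # fraction of the digitizer reporting errors (0.0-1.0)
    ui_hang_seconds: float       # time since the UI thread last responded

def is_screen_block_event(status: TouchScreenStatus,
                          damage_threshold: float = 0.2,
                          hang_threshold_s: float = 5.0) -> bool:
    """Return True when the touch screen is unusable (water, damage, or a hung device)."""
    if not status.responsive or status.moisture_detected:
        return True
    if status.damaged_region_ratio > damage_threshold:
        return True
    return status.ui_hang_seconds > hang_threshold_s

# Example: water droplets on an otherwise responsive screen trigger the block mode.
status = TouchScreenStatus(responsive=True, moisture_detected=True,
                           damaged_region_ratio=0.0, ui_hang_seconds=0.0)
if is_screen_block_event(status):
    print("enter screen block mode")   # here the device would configure the extra sensors
```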
  • the screen block mode configures the sensor (114a, 114b, and 114e) associated with the mobile device (100) to enable operation of the mobile device (100) during the screen block event.
  • the screen block mode configures the sensor (114a, 114b, and 114e) placed at the front panel (110) and the sensor (114c, 114d, and 114f-114h) placed at the back panel (112) of the mobile device (100) to enable operation of the mobile device (100) during the screen block event.
  • the screen block event controller (118) is configured to determine an orientation of the mobile device (100) and an angle of the mobile device (100). Further, the screen block event controller (118) is configured to determine whether the orientation of the mobile device (100) indicates a landscape mode or a portrait mode. In an example, the mobile device (100) is operated in the portrait mode depicted in FIGS. 4a-4e. In an example, the mobile device (100) is operated in the landscape mode depicted in FIG. 7.
  • In response to detecting that the orientation of the mobile device (100) indicates the landscape mode, the screen block event controller (118) configures the sensors (114a, 114b, and 114e) placed at the front panel (110) of the mobile device (100) based on the orientation of the mobile device (100) and the angle of the mobile device (100). In response to detecting that the orientation of the mobile device (100) indicates the portrait mode, the screen block event controller (118) configures the sensors (114c, 114d, and 114f-114h) placed at the back panel (112) of the mobile device (100) based on the orientation of the mobile device (100) and the angle of the mobile device (100).
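As an illustration of the orientation-dependent configuration described above, here is a minimal Python sketch; the accelerometer-based orientation test and the sensor identifier strings (mirroring the figure labels) are assumptions.

```python
# Illustrative sketch: choose which panel's sensors accept input, based on orientation.
FRONT_PANEL_SENSORS = ["114a", "114b", "114e"]          # labels mirror the figures
BACK_PANEL_SENSORS = ["114c", "114d", "114f", "114g", "114h"]

def orientation_from_accelerometer(ax: float, ay: float) -> str:
    """Rough landscape/portrait decision from the gravity components (an assumption)."""
    return "landscape" if abs(ax) > abs(ay) else "portrait"

def configure_sensors_for_block_mode(ax: float, ay: float) -> list:
    """Front-panel sensors handle input in landscape; back-panel sensors in portrait."""
    if orientation_from_accelerometer(ax, ay) == "landscape":
        return FRONT_PANEL_SENSORS
    return BACK_PANEL_SENSORS

# Example: gravity mostly along the y-axis -> portrait -> back-panel sensors.
print(configure_sensors_for_block_mode(ax=0.5, ay=9.6))
```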
  • the screen block event controller (118) is configured to receive a first input on the configured sensors (114a, 114b, and 114e) placed at the front panel (110) or the sensors (114c, 114d, and 114f-114h) placed at the back panel (112) of the mobile device (100) when the event is detected.
  • the first input can be, for example, but not limited to a tap input, a long press input, and a swipe input.
  • the screen block event controller (118) is configured to identify the application to be launched.
  • the application can be, for example, but not limited to a camera application, a social networking application, a finance application, a chat application, and a health related application.
  • the screen block event controller (118) is configured to identify the application to be launched based on an exposure value of the sensor (114a-114h) and a lux value of the sensor (114a-114h).
  • the screen block event controller (118) is configured to identify the application to be launched based on a probabilistic classification.
  • the probabilistic classification is used to provide a probability for each identifier of the sensor (114a-114h) corresponding to the features extracted from the exposure values of all the sensors (114a-114h) provided in the back panel (112) or the front panel (110) of the mobile device (100) using equation (1) and equation (2), where equation (1) corresponds to P(C(a)|X) = P(X|C(a)) P(C(a)) / P(X).
  • P(C(a)|X) represents the probability of a camera identifier (ID) given the values of the exposures.
  • P(C(a)) represents the probability of the camera ID being selected.
  • P(X|C(a)) represents the probability of the features when a certain camera ID is selected.
  • X = x1, x2, x3, ... represents the features mentioned.
  • the screen block event controller (118) is configured to identify the at least one application to be launched based on below equations (3-5).
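One common reading of a probabilistic classification of this form is a naive-Bayes-style selection over sensor/camera identifiers; the following Python sketch assumes that reading, with discretized exposure/lux features and training samples that are purely illustrative.

```python
import math
from collections import defaultdict

class NaiveBayesCameraSelector:
    """Toy selector: P(C(a)|X) is proportional to P(X|C(a)) * P(C(a)),
    over discretized exposure/lux features (all names here are illustrative)."""

    def __init__(self):
        self.class_counts = defaultdict(int)    # how often each camera/sensor ID was chosen
        self.feature_counts = defaultdict(lambda: defaultdict(int))  # per-ID feature counts
        self.total = 0

    def fit(self, samples):
        """samples: iterable of (camera_id, {feature_name: discretized_value})."""
        for camera_id, features in samples:
            self.class_counts[camera_id] += 1
            self.total += 1
            for item in features.items():
                self.feature_counts[camera_id][item] += 1

    def predict(self, features):
        """Return the camera ID with the highest (smoothed) posterior score."""
        best_id, best_score = None, float("-inf")
        for camera_id, count in self.class_counts.items():
            score = math.log(count / self.total)                    # log P(C(a))
            for item in features.items():
                seen = self.feature_counts[camera_id][item]
                score += math.log((seen + 1) / (count + 2))         # smoothed log P(x_i|C(a))
            if score > best_score:
                best_id, best_score = camera_id, score
        return best_id

# Purely illustrative training data and query.
selector = NaiveBayesCameraSelector()
selector.fit([("114b", {"exposure": "low", "lux": "low"}),
              ("114g", {"exposure": "high", "lux": "low"}),
              ("114g", {"exposure": "high", "lux": "mid"})])
print(selector.predict({"exposure": "high", "lux": "low"}))   # -> 114g
```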
  • the screen block event controller (118) is configured to determine motion data from the first input performed on the configured sensor (114a-114h) and map the motion data to the portion of the touch screen (108) of the mobile device (100).
  • when the user of the electronic device (100) provides a long press on the front sensor (114a), the front sensor (114a) detects the long-press action from the user, and the motion is accordingly mapped to the portion of the touch screen (108).
  • the screen block event controller (118) is configured to display an indicator indicating the application on the touch screen (108) of the mobile device (100) based on the mapping.
  • the motion data includes a direction of the first input performed on the configured sensor (114a-114h), a distance covered by the first input performed on the configured sensor (114a-114h), an orientation of the mobile device (100), a rate of motion of the first input performed on the configured sensor (114a-114h), and a time unit of the first input performed on the at least one configured sensor (114a-114h).
  • the motion data is determined by a right-to-left direction of the user input performed on the sensor (114a-114h) of the front panel (110), the distance covered by the first input performed on the sensor (114a), the landscape mode of the mobile device (100), and the speed of the user input performed on the sensor (114a) of the front panel (110).
  • the indicator dynamically moves to navigate across the applications of the mobile device (100) based on the motion data extracted from the first input performed on the at least one configured sensor (114a-114h).
  • the indicator dynamically moves to navigate across the applications of the mobile device (100) based on the below equations (6) and (7).
  • equation (6) predicts the speed of the indicator as a linear combination of the parameters, speed = W·X + B, where W denotes the weights/coefficients of each parameter in X and B is the bias.
  • W is a parametric vector used to predict the speed as accurately as possible, and equation (7) is the loss function used to train the weight parameters so that this loss is minimized.
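Taking equation (6) as a linear predictor and assuming a mean-squared-error form for the loss in equation (7) (the exact loss is not reproduced in the text), a minimal Python sketch of training such a speed model is:

```python
# Assumed form: speed = W . X + B (equation (6)), trained by minimizing a mean-squared-error
# loss (one plausible form of equation (7)); the training data below is made up.

def predict_speed(weights, bias, features):
    """Linear indicator-speed prediction from motion-data features."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def mse_loss(weights, bias, dataset):
    """dataset: list of (feature_vector, observed_indicator_speed)."""
    errors = [(predict_speed(weights, bias, x) - y) ** 2 for x, y in dataset]
    return sum(errors) / len(errors)

def train(dataset, lr=0.01, epochs=500):
    """Plain batch gradient descent on the weights W and bias B."""
    n = len(dataset[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        grad_w, grad_b = [0.0] * n, 0.0
        for x, y in dataset:
            err = predict_speed(weights, bias, x) - y
            for i in range(n):
                grad_w[i] += 2 * err * x[i] / len(dataset)
            grad_b += 2 * err / len(dataset)
        weights = [w - lr * g for w, g in zip(weights, grad_w)]
        bias -= lr * grad_b
    return weights, bias

# Features might be (swipe distance, swipe rate, time unit); values are illustrative.
data = [((1.0, 0.2, 0.1), 0.5), ((2.0, 0.5, 0.1), 1.2), ((3.0, 0.9, 0.2), 2.0)]
w, b = train(data)
print(round(mse_loss(w, b, data), 4))
```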
  • the application on the touch screen (108) of the mobile device (100) is determined based on a frequently used application, a preconfigured application, a recently used application, a location of the user and a type of the screen block event.
  • the frequently used application is the first type of the social networking application.
  • the preconfigured application is an application already installed in the mobile device by the original equipment manufacturer (OEM).
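As an illustration of how the prioritization factors above might be combined, the following Python sketch ranks candidate applications with a weighted score; the weights and the AppCandidate fields are illustrative assumptions, not values from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class AppCandidate:
    name: str
    use_frequency: float          # normalized 0..1
    preconfigured: bool           # installed by the OEM
    minutes_since_last_use: float
    relevant_to_location: bool
    relevant_to_event: bool       # e.g., camera app during an underwater block event

def priority_score(app: AppCandidate) -> float:
    """Illustrative weighted combination of the factors named in the description."""
    recency = 1.0 / (1.0 + app.minutes_since_last_use)
    return (0.35 * app.use_frequency
            + 0.10 * app.preconfigured
            + 0.25 * recency
            + 0.10 * app.relevant_to_location
            + 0.20 * app.relevant_to_event)

def rank_applications(apps):
    """Order application names so the indicator starts on the most likely choice."""
    return [a.name for a in sorted(apps, key=priority_score, reverse=True)]

apps = [AppCandidate("camera", 0.6, True, 30, False, True),
        AppCandidate("gallery", 0.4, True, 5, False, False),
        AppCandidate("chat", 0.9, False, 2, False, False)]
print(rank_applications(apps))
```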
  • the screen block event controller (118) is configured to receive a second input on the configured sensor (114a-114h) and launch the identified application in the mobile device (100) based on the second input.
  • the second input can be, for example, but not limited to a tap input, a long press input, and a swipe input.
  • the mobile device (100) is in an underwater environment (300).
  • the sensor (114a) is configured to be used for detecting a touch input on the front sensor (114a) as shown in FIG. 4a.
  • the mobile device (100) selects a first application (i.e., gallery application) as shown in the FIG. 4b.
  • the user of the mobile device (100) moves a finger in a certain direction and provides the touch input on the front camera (114b).
  • the direction is a horizontal direction.
  • the mobile device (100) selects the second application (i.e., camera application) as shown in the FIG. 4c and FIG. 4d.
  • the mobile device (100) receives the long-press input on the front camera (114b), and the electronic device opens the camera to capture the picture based on the long-press input as shown in FIG. 4e.
  • the mobile device (100) is operated in the underwater environment (300).
  • the back sensor (114c) is configured to be used for detecting the touch input on the back sensor (114c) placed on the back panel (112) as shown in FIG. 5a.
  • the mobile device (100) selects a first application (i.e., camera application) as shown in the FIG. 5b.
  • the user of the mobile device (100) moves a finger in a certain direction and provides the touch input on the back camera (114g).
  • the direction is a vertical direction.
  • Based on the touch input received on the sensor (114g), the mobile device (100) selects the second application (i.e., video calling application) as shown in FIGS. 5c and 5d. After selecting the second application, the mobile device (100) receives the tap input on the sensor (114h), and the mobile device (100) opens the video calling application to establish the video call with the other user based on the tap input as shown in FIGS. 5e and 5f.
  • the user of the mobile device (100) can operate a social networking application.
  • the user of the mobile device (100) can control device functionality e.g., changing camera mode, switching between front and back camera etc.
  • the mobile device (100) is operated in the oil environment (600).
  • the back sensor (114c) is configured to be used for detecting the touch input on the back sensor (114c) placed on the back panel (112) as shown in FIG. 6a.
  • the mobile device (100) selects a first application (i.e., news application) as shown in the FIG. 6b.
  • the user of the mobile device (100) moves a finger in a certain direction and provides the touch input on the back camera (114d).
  • the direction is the vertical direction.
  • Based on the touch input received on the sensor (114d), the mobile device (100) selects the second application (i.e., calling application) as shown in FIGS. 6c and 6d. After selecting the second application, the mobile device (100) receives the tap input on the sensor (114d), and the mobile device (100) opens the calling application to establish the call with the other user based on the tap input as shown in FIGS. 6e and 6f.
  • the mobile phone (100) is operated in a chemical refinery environment (800) to perform a video call to communicate with other users as well, without hampering the user experience.
  • the mobile phone (100) is operated in the chemical refinery environment (800) to capture the photo to show to other users, according to an embodiment as disclosed herein.
  • when the touch screen (108) of the mobile device (100) is broken, due to which a user interface is not visible, it is not possible to perform functionality such as answering the call or rejecting the call through the touch screen; in such scenarios, the functionality can be performed using a combination of sensors (114a-114b) using the proposed methods.
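As a concrete illustration of that broken-screen scenario, a small Python sketch mapping front-sensor gestures to call handling is given below; the gesture names and the answer/reject callbacks are assumptions.

```python
# Hypothetical mapping from a front-sensor gesture to call handling on a broken screen.
def handle_call_gesture(gesture, answer_call, reject_call):
    """Map a gesture detected on the front sensors (e.g., 114a/114b) to a call action."""
    if gesture == "long_press":
        answer_call()
        return "answered"
    if gesture == "double_tap":
        reject_call()
        return "rejected"
    return "ignored"   # unknown gestures leave the call ringing

# Example with stub callbacks.
print(handle_call_gesture("long_press", lambda: None, lambda: None))   # -> answered
```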
  • the processor (102) is configured to execute instructions stored in the memory (106) and to perform various processes.
  • the communicator (104) is configured for communicating internally between internal hardware components and with external devices via one or more networks.
  • the memory (106) stores instructions to be executed by the processor (102).
  • the memory (106) may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.
  • the memory (106) may, in some examples, be considered a non-transitory storage medium.
  • the term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted that the memory (106) is non-movable.
  • the memory (106) can be configured to store larger amounts of information than the memory.
  • a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache).
  • At least one of the plurality of hardware components may be implemented through an artificial intelligence (AI) model.
  • a function associated with AI may be performed through the non-volatile memory, the volatile memory, and the processor (102).
  • the processor (102) may include one or a plurality of processors.
  • one or a plurality of processors may be a general purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an AI-dedicated processor such as a neural processing unit (NPU).
  • the one or a plurality of processors control the processing of the input data in accordance with a predefined operating rule or artificial intelligence (AI) model stored in the non-volatile memory and the volatile memory.
  • the predefined operating rule or artificial intelligence model is provided through training or learning.
  • learning means that, by applying a learning algorithm to a plurality of learning data, a predefined operating rule or AI model of a desired characteristic is made.
  • the learning may be performed in a device itself in which AI according to an embodiment is performed, and/or may be implemented through a separate server/system.
  • the AI model may consist of a plurality of neural network layers. Each layer has a plurality of weight values, and performs a layer operation through calculation of a previous layer and an operation of a plurality of weights.
  • Examples of neural networks include, but are not limited to, convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), restricted Boltzmann Machine (RBM), deep belief network (DBN), bidirectional recurrent deep neural network (BRDNN), generative adversarial networks (GAN), and deep Q-networks.
  • the learning algorithm is a method for training a predetermined target device (for example, a robot) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction.
  • Examples of learning algorithms include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
  • FIG. 1 shows various hardware components of the mobile device (100), but it is to be understood that other embodiments are not limited thereto.
  • the mobile device (100) may include a lesser or greater number of components.
  • the labels or names of the components are used only for illustrative purposes and do not limit the scope of the invention.
  • One or more components can be combined together to perform same or substantially similar function to handle the screen block event associated with the touch screen (108) of the mobile device (100).
  • FIGS. 2a and 2b are a flow diagram (S200) illustrating a method for operating the mobile device (100) during the screen block event associated with the touch screen (108) of the mobile device (100), according to an embodiment as disclosed herein.
  • the operations (S202-S226) are performed by the screen block event controller (118).
  • the method includes detecting the screen block event associated with the touch screen (108) of the mobile device (100).
  • the method includes automatically configuring the mobile device (100) in the screen block mode, when the screen block event is detected.
  • the method includes determining the orientation of the mobile device (100) and the angle of the mobile device (100).
  • the method includes determining whether the orientation of the mobile device (100) indicates the landscape mode or the portrait mode.
  • the method includes configuring the sensor (114a, 114b and 114e) placed at the front panel (110) of the mobile device (100) based on the orientation of the mobile device (100).
  • the method includes configuring the sensor (114c, 114d, 114f-114h) placed at the back panel (112) of the mobile device (100) based on the orientation of the mobile device (100).
  • the method includes receiving the first input on the configured sensor (114) placed at the front panel (110) of the mobile device (100).
  • the method includes receiving the first input on the configured sensor (114c or 114d) placed at the back panel (112) of the mobile device (100)
  • the method includes determining motion data from the first input performed on the configured sensor (114a-114h).
  • the method includes mapping the motion data to the portion of the touch screen (108) of the mobile device (100).
  • the method includes detecting the indicator indicating the application on the touch screen (108) of the mobile device (100) based on the mapping, when a change in a lux value and an exposure value is greater than a predefined threshold.
  • the predefined threshold is configured by the user or the OEM.
  • the method includes displaying the indicator indicating the application on the touch screen (108) of the mobile device (100) based on the mapping.
  • the method includes receiving the second input on the configured sensor (114a-114h).
  • the method includes launching the at least one identified application in the mobile device (100) based on the second input.
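Putting the steps of the flow diagram together, a compact end-to-end Python sketch is shown below; every device hook it calls (detect_block_event, read_gesture, and so on) is a placeholder rather than an API from the disclosure.

```python
def operate_during_screen_block(device):
    """Rough end-to-end walk-through of the flow (S202-S226) with placeholder hooks."""
    if not device.detect_block_event():                 # detect the screen block event
        return

    device.enter_screen_block_mode()                    # auto-configure using the ML model
    sensors = (device.front_sensors                     # front panel in landscape,
               if device.orientation() == "landscape"   # back panel in portrait
               else device.back_sensors)

    first_input = device.read_gesture(sensors)          # first input on a configured sensor
    motion = device.extract_motion_data(first_input)    # direction, distance, rate, time unit
    target = device.map_to_screen_region(motion)        # map motion data to the touch screen
    device.show_indicator(target)                       # indicator marks the identified app

    second_input = device.read_gesture(sensors)         # second input confirms the choice
    if second_input:
        device.launch_application(target)               # launch/activate the application
```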
  • the embodiments disclosed herein can be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.

Abstract

Embodiments herein disclose a method for operating a mobile device (100) during a screen block event associated with a touch screen (108) of the device (100). The method includes automatically configuring the device (100) in a screen block mode using a machine learning model, when the event is detected. The mode configures at least one sensor (114) associated with the device (100) to enable operation of the device (100) during the event. Further, the method includes receiving a first input on the at least one configured sensor (114). Further, the method includes identifying at least one application on which an action is to be performed based on the first input. Further, the method includes receiving a second input on the at least one sensor (114). Further, the method includes performing the action on the at least one identified application based on the second input.

Description

MOBILE DEVICE OPERATION DURING SCREEN BLOCK EVENT OF MOBILE DEVICE AND METHOD THEREOF
The present disclosure relates to a mobile device, and more specifically, to a method for operating the mobile device during a screen block event associated with a touch screen of the mobile device.
A mobile device with a touch screen cannot be used under water due to touch screen limitations in an underwater environment. Hardware buttons are available in the mobile device to operate in the underwater environment, but the hardware buttons provide restricted functionality due to limitations of mechanical parts. Further, if a screen block event occurs on the touch screen of the mobile device due to water droplets on the touch screen, damage to a portion of the touch screen, or hanging of the mobile device, the user of the mobile device faces issues in using the touch screen. This reduces the user experience in an underwater environment.
Thus, it is desired to address the above mentioned disadvantages or other shortcomings or at least provide a useful alternative.
The principal object of the embodiments herein is to provide a method for operating a mobile device during a screen block event associated with a touch screen of the mobile device without requiring periodic calibration in the mobile device. More specifically, the proposed method allows access to the mobile device when the touch screen is not responding or is blocked/hanging due to spilled water or damage, or when a portion of the touch screen is submerged under water. Further, a combination of inbuilt sensors, apart from the sensor associated with the touch screen, is enabled to allow the user to operate the mobile device.
Another object of the embodiments herein is to automatically configure the mobile device in a screen block mode, when the screen block event is detected, so as to configure one or more sensors placed at one of a front panel and a back panel of the mobile device to enable operation of the mobile device in the screen block mode during the screen block event.
Another object of the embodiments herein is to identify one or more applications on which an action is to be performed based on user input, without using any external hardware.
Another object of the embodiments herein is to operate the mobile device during the screen block event associated with the touch screen of the mobile device by using application prioritization, requiring less user effort to perform an action during the screen block event, and placing an intuitive indicator at the most preferred functionality to save user effort, time, and power.
Accordingly, embodiments herein disclose a method for operating a mobile device during a screen block event associated with a touch screen of the mobile device. The method includes detecting, by the mobile device, the screen block event associated with the touch screen of the mobile device. Further, the method includes automatically configuring the mobile device in a screen block mode using a machine learning model, when the event is detected. The screen block mode configures at least one sensor associated with the mobile device to enable operation of the mobile device during the screen block event. Further, the method includes receiving, by the mobile device, a first input on the at least one configured sensor associated with the mobile device when the event is detected. Further, the method includes identifying, by the mobile device, at least one application on which an action is to be performed based on the first input. Further, the method includes receiving, by the mobile device, a second input on the at least one configured sensor. Further, the method includes performing, by the mobile device, the action on the at least one identified application in the mobile device based on the second input.
In an embodiment, the action is one of launching the at least one identified application in the mobile device based on the second input and activating the at least one identified application in the mobile device based on the second input.
In an embodiment, the screen block mode configures the sensor placed at one of a front panel and a back panel of the mobile device to enable operation of the mobile device during the screen block event.
In an embodiment, configuring, by the mobile device, the at least one configured sensor associated with the mobile device to enable operation of the mobile device during the screen block event includes determining, by the mobile device, at least one of an orientation of the mobile device, determining, by the mobile device, whether the orientation of the mobile device indicates one of a landscape mode and a portrait mode, and performing, by the mobile device, at least one of: configuring the at least one sensor placed at the front panel of the mobile device based on at least one of the orientation of the mobile device in response to detecting the orientation of the mobile device indicates the landscape mode, and configuring the at least one sensor placed at the back panel of the mobile device based on at least one of the orientation of the mobile device in response to detecting the orientation of the mobile device indicates the portrait mode.
In an embodiment, identifying, by the mobile device, the at least one application on which the action is to be performed based on the first input includes determining, by the mobile device, motion data from the first input performed on the at least one sensor, mapping, by the mobile device, the motion data to at least one portion of the touch screen of the mobile device, and displaying, by the mobile device, an indicator indicating the at least one application on the touch screen of the mobile device based on the mapping.
In an embodiment, the motion data includes at least one of a direction of the first input performed on the at least one sensor, a distance covered by the first input performed on the at least one sensor, an orientation of the mobile device, a rate of motion of the first input performed on the at least one sensor, and a time unit of the first input performed on the at least one configured sensor.
In an embodiment, the indicator dynamically moves to navigate across the at least one application of the mobile device based on the motion data extracted from the first input performed on the at least one sensor.
In an embodiment, the at least one application on the touch screen of the mobile device is determined based on at least one of a frequently used application, a preconfigured application, a recently used application, a location of the user, and a type of the screen block event.
In an embodiment, the screen block event is caused by at least one of water droplets on the touch screen of the mobile device, damage to at least one portion of the touch screen of the mobile device, hanging of the mobile device, and a portion of the touch screen of the mobile device being submerged under water.
Accordingly, embodiments herein disclose a mobile device for handling a screen block event associated with a touch screen of the mobile device. The mobile device includes a screen block event controller coupled with a memory and a processor. The screen block event controller is configured to detect the screen block event associated with the touch screen of the mobile device. Further, the screen block event controller is configured to automatically configure the mobile device in a screen block mode using a machine learning model, when the event is detected. The screen block mode configures at least one sensor associated with the mobile device to enable operation of the mobile device during the screen block event. Further, the screen block event controller is configured to receive a first input on the at least one configured sensor associated with the mobile device when the event is detected. Further, the screen block event controller is configured to identify at least one application on which an action is to be performed based on the first input. Further, the screen block event controller is configured to receive a second input on the at least one configured sensor. Further, the screen block event controller is configured to perform the action on the at least one identified application in the mobile device based on the second input.
These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
The principal object of the embodiments herein is to provide a method for operating a mobile device during a screen block event associated with a touch screen of the mobile device without requiring periodic calibration in the mobile device. More specifically, the proposed method allows access to the mobile device when the touch screen is not responding or is blocked/hanging due to spilled water or damage, or when a portion of the touch screen is submerged under water. Further, a combination of inbuilt sensors, apart from the sensor associated with the touch screen, is enabled to allow the user to operate the mobile device.
Another object of the embodiments herein is to automatically configure the mobile device in a screen block mode, when the screen block event is detected, so as to configure one or more sensors placed at one of a front panel and a back panel of the mobile device to enable operation of the mobile device in the screen block mode during the screen block event.
Another object of the embodiments herein is to identify one or more applications on which an action is to be performed based on user input, without using any external hardware.
Another object of the embodiments herein is to operate the mobile device during the screen block event associated with the touch screen of the mobile device by using application prioritization, requiring less user effort to perform an action during the screen block event, and placing an intuitive indicator at the most preferred functionality to save user effort, time, and power.
FIG. 1 illustrates various hardware components of a mobile device, according to an embodiment as disclosed herein.
FIG. 2a and FIG. 2b are a flow diagram illustrating a method for operating the mobile device during a screen block event associated with a touch screen of the mobile device, according to an embodiment as disclosed herein.
FIG. 3a is an example scenario in which various sensors placed in a front panel of the mobile device are depicted, according to an embodiment as disclosed herein.
FIG. 3b is an example scenario in which various sensors placed in a back panel of the mobile device are depicted, according to an embodiment as disclosed herein.
FIGS. 4a-4e illustrate an example scenario in which the mobile phone is operated under water to capture a photo, according to an embodiment as disclosed herein.
FIGS. 5a-5f illustrate an example scenario in which the mobile phone is operated under water to perform a video call, according to an embodiment as disclosed herein.
FIGS. 6a-6f illustrate an example scenario in which the mobile phone is operated under an oil environment to perform a phone call, according to an embodiment as disclosed herein.
FIG. 7 illustrates an example scenario in which the mobile device is operated in a landscape mode, according to an embodiment as disclosed herein.
FIG. 8 illustrates an example scenario in which the mobile phone is operated under a chemical refinery environment to perform a video call, according to an embodiment as disclosed herein.
FIG. 9 illustrates an example scenario in which the mobile phone is operated under the chemical refinery environment to capture the photo, according to an embodiment as disclosed herein.
FIGS. 10a-10c illustrate an example scenario in which the mobile device is operated while the mobile device is hanging, according to an embodiment as disclosed herein.
FIGS. 11a-11b illustrate an example scenario in which the mobile device is operated while the touch screen of the mobile device is damaged, according to an embodiment as disclosed herein.
The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. Also, the various embodiments described herein are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments. The term "or" as used herein, refers to a non-exclusive or, unless otherwise indicated. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein can be practiced and to further enable those skilled in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
As is traditional in the field, embodiments may be described and illustrated in terms of blocks which carry out a described function or functions. These blocks, which may be referred to herein as units or modules or the like, are physically implemented by analog or digital circuits such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits, or the like, and may optionally be driven by firmware and software. The circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like. The circuits constituting a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block. Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the invention. Likewise, the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the invention.
The accompanying drawings are used to help easily understand various technical features and it should be understood that the embodiments presented herein are not limited by the accompanying drawings. As such, the present disclosure should be construed to extend to any alterations, equivalents and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally only used to distinguish one element from another.
Accordingly, embodiments herein achieve a method for operating a mobile device during a screen block event associated with a touch screen of the mobile device. The method includes detecting, by the mobile device, the screen block event associated with the touch screen of the mobile device. Further, the method includes automatically configuring the mobile device in a screen block mode using a machine learning model when the event is detected. The screen block mode configures at least one sensor associated with the mobile device to enable operation of the mobile device during the screen block event. Further, the method includes receiving, by the mobile device, a first input on the at least one configured sensor associated with the mobile device when the event is detected. Further, the method includes identifying, by the mobile device, at least one application on which an action is to be performed, based on the first input. Further, the method includes receiving, by the mobile device, a second input on the at least one configured sensor. Further, the method includes performing an action on the at least one identified application in the mobile device based on the second input.
Unlike conventional methods and systems, the method can be used to operate the mobile device inside water without using any special hardware and without requiring periodic calibration of the mobile device. More specifically, the proposed invention allows access to the mobile device when the touch screen of the mobile device is not responding or is blocked/hung due to spilling of water or damage. Further, a combination of inbuilt sensors, apart from the sensor associated with the touch screen, is enabled to allow the user to operate the mobile device. The use of these sensors generally provides effective results, as the sensor readings do not vary based on depth or the type of liquid, such as water, oil, or the like.
Referring now to the drawings, and more particularly to FIGS. 1 through 11b, there are shown preferred embodiments.
FIG. 1 illustrates various hardware components of a mobile device (100), according to an embodiment as disclosed herein. The mobile device (100) can be, for example, but is not limited to, a cellular phone, a smart phone, a Personal Digital Assistant (PDA), a tablet computer, a laptop computer, an Internet of Things (IoT) device, a virtual reality device, an immersive device, and a smart watch.
The mobile device (100) includes a processor (102), a communicator (104), a memory (106), a touch screen (108), a front panel (110), a back panel (112), a plurality of sensors (114a-114h), an edge panel (116), and a screen block event controller (118). The sensors (114a, 114b, and 114e) are placed at the front panel (110), as shown in FIG. 3a. The sensors (114c, 114d, and 114f-114h) are placed at the back panel (112) of the mobile device. The processor (102) is coupled with the communicator (104), the memory (106), the touch screen (108), the front panel (110), the back panel (112), the sensors (114a-114h), the edge panel (116), and the screen block event controller (118).
The screen block event controller (118) is configured to detect a screen block event associated with the touch screen (108) of the mobile device (100). The screen block event is caused due to water droplets on the touch screen (108) of the mobile device (100), damage to a portion of the touch screen (108) of the mobile device (100), hanging of the mobile device (100) due to the processing capabilities of the mobile device (100), or a portion of the touch screen (108) of the mobile device (100) being submerged under water.
Based on detecting the screen block event associated with the touch screen (108) of the mobile device (100), the screen block event controller (118) automatically configures the mobile device (100) in a screen block mode using a machine learning model. The screen block mode configures the sensor (114a, 114b, and 114e) associated with the mobile device (100) to enable operation of the mobile device (100) during the screen block event. In an embodiment, the screen block mode configures the sensor (114a, 114b, and 114e) placed at the front panel (110) and the sensor (114c, 114d, and 114f-114h) placed at the back panel (112) of the mobile device (100) to enable operation of the mobile device (100) during the screen block event.
Further, the screen block event controller (118) is configured to determine an orientation of the mobile device (100) and an angle of the mobile device (100). Further, the screen block event controller (118) is configured to determine whether the orientation of the mobile device (100) indicates a landscape mode or a portrait mode. In an example, the mobile device (100) is operated in the portrait mode as depicted in FIGS. 4a-4e. In an example, the mobile device (100) is operated in the landscape mode as depicted in FIG. 7. In response to detecting that the orientation of the mobile device (100) indicates the landscape mode, the screen block event controller (118) configures the sensors (114a, 114b, and 114e) placed at the front panel (110) of the mobile device (100) based on the orientation of the mobile device (100) and the angle of the mobile device (100). In response to detecting that the orientation of the mobile device (100) indicates the portrait mode, the screen block event controller (118) configures the sensors (114c, 114d, and 114f-114h) placed at the back panel (112) of the mobile device (100) based on the orientation of the mobile device (100) and the angle of the mobile device (100).
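By way of illustration only, the orientation-dependent sensor selection described above can be sketched in Python as follows; the sensor groupings and helper names are assumptions made for readability and are not part of this disclosure.

```python
# Minimal sketch of orientation-based sensor selection for the screen block mode.
# Sensor identifiers and the grouping below are illustrative assumptions only.

FRONT_PANEL_SENSORS = ["114a", "114b", "114e"]
BACK_PANEL_SENSORS = ["114c", "114d", "114f", "114g", "114h"]

def configure_screen_block_sensors(orientation: str, angle_deg: float) -> list:
    """Return the sensor group to enable during the screen block event."""
    if orientation == "landscape":
        active = FRONT_PANEL_SENSORS   # landscape mode uses the front panel sensors
    else:
        active = BACK_PANEL_SENSORS    # portrait mode uses the back panel sensors
    # The device angle could further refine which sensors are practical to reach.
    print("Screen block mode: enabling %s at %.0f degrees" % (active, angle_deg))
    return active
```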
Further, the screen block event controller (118) is configured to receive a first input on the configured sensor (114a,114b, and 114e) placed at the front panel (110) and the sensor (114c, 114d, 114f-114h) placed at the back panel (112) of the mobile device (100) when the event is detected. The first input can be, for example, but not limited to a tap input, a long press input, and a swipe input. Based on the first input, the screen block event controller (118) is configured to identify the application to be launched. The application can be, for example, but not limited to a camera application, a social networking application, a finance application, a chat application, and a health related application.
Various ways can be adopted to identify the application to be launched. The screen block event controller (118) is configured to identify the application to be launched based on an exposure value of the sensor (114a-114h) and a lux value of the sensor (114a-114h).
The screen block event controller (118) is configured to identify the application to be launched based on a probabilistic classification. The probabilistic classification assigns a probability to each identifier of the sensor (114a-114h) corresponding to the features extracted from the exposure values of all the sensors (114a-114h) provided in the back panel (112) or the front panel (110) of the mobile device (100), using equation (1) and equation (2):
<equation (1)>
P(C(a) | X) = P(X | C(a)) × P(C(a)) / P(X)
<equation (2)>
P(C(a) | X) ∝ P(C(a)) × P(x1 | C(a)) × P(x2 | C(a)) × ... × P(xn | C(a))
where P(C(a)|X) represents the probability of a camera identifier (ID) given the values of the exposures, P(C(a)) represents the prior probability of the camera ID being selected, P(xi|C(a)) represents the probability of the features when a certain camera ID is selected, and X = x1, x2, x3, ... represents the features mentioned above.
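A minimal Python sketch of such a probabilistic classification is given below, assuming Gaussian likelihoods for the exposure features; the priors, the per-sensor feature statistics, and the function names are illustrative assumptions rather than the exact model used.

```python
import math

def gaussian(x, mean, var):
    """Gaussian likelihood P(x_i | C(a)) for one exposure feature (assumed form)."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def posterior(features, priors, stats):
    """P(C(a)|X) proportional to P(C(a)) * product of P(x_i|C(a)), then normalized."""
    scores = {}
    for cam_id, prior in priors.items():
        likelihood = 1.0
        for i, x in enumerate(features):
            mean, var = stats[cam_id][i]
            likelihood *= gaussian(x, mean, var)
        scores[cam_id] = prior * likelihood
    total = sum(scores.values()) or 1.0
    return {cam_id: s / total for cam_id, s in scores.items()}

# Example: exposure features read from the configured sensors (made-up numbers).
priors = {"114a": 0.4, "114b": 0.3, "114e": 0.3}
stats = {cam: [(100.0, 50.0), (80.0, 40.0), (20.0, 10.0)] for cam in priors}
probs = posterior([95.0, 82.0, 18.0], priors, stats)
print(max(probs, key=probs.get))  # most probable camera/sensor identifier
```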
The screen block event controller (118) is also configured to identify the at least one application to be launched based on equations (3)-(5) below.
<equation (3)>
l = log_b( p / (1 - p) ) = B0 + B1·X1 + B2·X2 + B3·X3
<equation (4)>
p / (1 - p) = b^(B0 + B1·X1 + B2·X2 + B3·X3)
<equation (5)>
p = 1 / ( 1 + b^-(B0 + B1·X1 + B2·X2 + B3·X3) )
where l is the log-odds, p is the probability, b is the base of the exponent (e), Xi are the independent variables, and Bi are the weights of the model, with X1 being the lux value with no action, X2 being the lux value with an action, and X3 being the difference between X2 and X1. Equations (3)-(5) are used to convert a regression value, whose range can be from minus infinity to plus infinity, into the range [0, 1] by using the log of odds, so that the mobile device (100) can easily use a threshold for further classification.
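The conversion can be illustrated with the short Python sketch below; the weight values, the lux readings, and the 0.5 threshold are assumptions used only to show how the log-odds are squashed into [0, 1] and thresholded.

```python
import math

def launch_probability(weights, bias, features):
    """p = 1 / (1 + e^-(B0 + B1*X1 + B2*X2 + B3*X3)), per equation (5)."""
    log_odds = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-log_odds))

lux_no_action = 120.0                         # X1: lux with no user action
lux_with_action = 40.0                        # X2: lux while the finger covers the sensor
difference = lux_with_action - lux_no_action  # X3: difference between X2 and X1

p = launch_probability([0.01, -0.02, -0.03], 0.5,
                       [lux_no_action, lux_with_action, difference])
if p > 0.5:  # simple classification threshold
    print("Treat the input as a selection gesture (p = %.2f)" % p)
```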
Further, the screen block event controller (118) is configured to determine motion data from the first input performed on the configured sensor (114a-114h) and to map the motion data to the portion of the touch screen (108) of the mobile device (100). In an example, when the user of the mobile device (100) provides a long press on the front sensor (114a), the front sensor (114a) detects the long press action from the user and accordingly maps it to the corresponding portion of the touch screen (108). Further, the screen block event controller (118) is configured to display an indicator indicating the application on the touch screen (108) of the mobile device (100) based on the mapping.
The motion data includes a direction of the first input performed on the configured sensor (114a-114h), a distance covered by the first input performed on the configured sensor (114a-114h), an orientation of the mobile device (100), a rate of motion of the first input performed on the configured sensor (114a-114h), and a time unit of the first input performed on the at least one configured sensor (114a-114h). In an example, the motion data is determined from a right-to-left direction of the user input performed on the sensor (114a) of the front panel (110), the distance covered by the first input performed on the sensor (114a), the landscape mode of the mobile device (100), and the speed of the user input performed on the sensor (114a) of the front panel (110).
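For illustration, the motion data attributes listed above can be grouped as shown below; the field names, the assumed sensor width, and the mapping rule are hypothetical and only sketch how a swipe could be projected onto a portion of the touch screen (108).

```python
from dataclasses import dataclass

@dataclass
class MotionData:
    direction: str       # e.g. "right_to_left" or "top_to_bottom"
    distance_mm: float   # distance covered by the first input across the sensor
    orientation: str     # "portrait" or "landscape"
    rate_mm_s: float     # rate of motion of the first input
    duration_s: float    # time unit of the first input

def map_to_screen_x(motion: MotionData, screen_width_px: int) -> int:
    """Map the swipe distance onto a horizontal screen coordinate (sketch only)."""
    # Assume a 60 mm usable sensor span maps linearly onto the screen width.
    fraction = min(motion.distance_mm / 60.0, 1.0)
    x = int(fraction * screen_width_px)
    return screen_width_px - x if motion.direction == "right_to_left" else x

swipe = MotionData("right_to_left", 30.0, "landscape", 45.0, 0.7)
print(map_to_screen_x(swipe, 1080))  # indicator position derived from the swipe
```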
The indicator dynamically moves to navigate across the applications of the mobile device (100) based on the motion data extracted from the first input performed on the at least one configured sensor (114a-114h).
In an example, the indicator dynamically moves to navigate across the applications of the mobile device (100) based on the below equations:
<equation (6)>
speed = W·X + B = W1·X1 + W2·X2 + ... + Wn·Xn + B
<equation (7)>
Loss = (1/n) · Σi ( yi - (W·Xi + B) )^2
where W is the vector of weights/coefficients of the parameters X and B is the bias. W is the parametric vector used to predict the indicator speed as accurately as possible, and equation (7) is the loss function that is minimized to train the weight parameters.
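A minimal sketch of equations (6) and (7) follows: a linear model predicting the indicator speed is fitted by minimizing a mean-squared-error loss with plain gradient descent; the sample data, learning rate, and feature choice are assumptions for illustration.

```python
def predict(weights, bias, x):
    """speed = W . X + B, per equation (6)."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def train(samples, targets, lr=1e-4, epochs=2000):
    """Fit W and B by gradient descent on the squared-error loss of equation (7)."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    n = len(samples)
    for _ in range(epochs):
        for j in range(len(weights)):
            grad_j = sum((predict(weights, bias, x) - y) * x[j]
                         for x, y in zip(samples, targets)) * 2 / n
            weights[j] -= lr * grad_j
        grad_b = sum(predict(weights, bias, x) - y
                     for x, y in zip(samples, targets)) * 2 / n
        bias -= lr * grad_b
    return weights, bias

# Each sample: [swipe rate, swipe distance]; target: indicator speed on screen.
samples = [[1.0, 10.0], [2.0, 25.0], [3.0, 40.0]]
targets = [50.0, 120.0, 200.0]
w, b = train(samples, targets)
print([round(v, 2) for v in w], round(b, 2))
```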
The application indicated on the touch screen (108) of the mobile device (100) is determined based on a frequently used application, a preconfigured application, a recently used application, a location of the user, and a type of the screen block event. In an example, if the user of the mobile device (100) has installed five types of social networking applications on the mobile device (100) but uses only a first type of social networking application among the five, then the frequently used application is the first type of social networking application. In an example, the preconfigured application is an application preinstalled on the mobile device by the original equipment manufacturer (OEM).
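One possible way to combine these signals is sketched below; the scoring weights, category fields, and example values are assumptions and do not reflect a specific prioritization defined herein.

```python
def pick_start_application(apps, event_type, location):
    """Score candidate applications and return the most relevant one (sketch)."""
    def score(app):
        s = 3 * app.get("launch_count", 0)             # frequently used application
        s += 2 if app.get("preconfigured") else 0      # preinstalled by the OEM
        s += 2 if app.get("recently_used") else 0      # recently used application
        if event_type == "underwater" and app["name"] == "camera":
            s += 5                                     # type of screen block event
        if location == "office" and app["name"] == "video_call":
            s += 3                                     # location of the user
        return s
    return max(apps, key=score)

apps = [
    {"name": "camera", "launch_count": 8, "recently_used": True},
    {"name": "video_call", "launch_count": 3, "preconfigured": True},
]
print(pick_start_application(apps, "underwater", "pool")["name"])  # -> camera
```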
Further, the screen block event controller (118) is configured to receive a second input on the configured sensor (114a-114h) and launch the identified application in the mobile device (100) based on the second input. The second input can be, for example, but not limited to a tap input, a long press input, and a swipe input.
In an example, as shown in FIGS. 4a-4e, consider a scenario in which the mobile device (100) is in an underwater environment (300). The sensor (114a) is configured to be used for detecting a touch input on the front sensor (114a), as shown in FIG. 4a. Based on the touch input, the mobile device (100) selects a first application (i.e., a gallery application), as shown in FIG. 4b. Further, the user of the mobile device (100) moves a finger in a certain direction and provides the touch input on the front camera (114b). Here, the direction is a horizontal direction. Based on the touch input received on the front camera (114b), the mobile device (100) selects the second application (i.e., a camera application), as shown in FIG. 4c and FIG. 4d. After selecting the second application, the mobile device (100) receives a long press input on the front camera (114b), and the mobile device (100) opens the camera to capture the picture based on the long press input, as shown in FIG. 4e.
In an example, as shown in FIGS. 5a-5f, consider a scenario in which the mobile device (100) is operated in the underwater environment (300). The back sensor (114c) is configured to be used for detecting the touch input on the back sensor (114c) placed on the back panel (112), as shown in FIG. 5a. Based on the touch input, the mobile device (100) selects a first application (i.e., a camera application), as shown in FIG. 5b. Further, the user of the mobile device (100) moves a finger in a certain direction and provides the touch input on the back camera (114g). Here, the direction is a vertical direction. Based on the touch input received on the sensor (114g), the mobile device (100) selects the second application (i.e., a video calling application), as shown in FIGS. 5c and 5d. After selecting the second application, the mobile device (100) receives a tap input on the sensor (114h), and the mobile device (100) opens the video calling application to establish the video call with another user based on the tap input, as shown in FIGS. 5e and 5f.
In another example, in the underwater environment (300), the user of the mobile device (100) can operate a social networking application. When the mobile device (100) is present in the underwater environment (300), the user of the mobile device (100) can also control device functionality, e.g., changing the camera mode, switching between the front and back cameras, etc.
In an example, as shown in FIGS. 6a-6f, consider a scenario in which the mobile device (100) is operated in an oil environment (600). The back sensor (114c) is configured to be used for detecting the touch input on the back sensor (114c) placed on the back panel (112), as shown in FIG. 6a. Based on the touch input, the mobile device (100) selects a first application (i.e., a news application), as shown in FIG. 6b. Further, the user of the mobile device (100) moves a finger in a certain direction and provides the touch input on the back camera (114d). Here, the direction is the vertical direction. Based on the touch input received on the sensor (114d), the mobile device (100) selects the second application (i.e., a calling application), as shown in FIGS. 6c and 6d. After selecting the second application, the mobile device (100) receives a tap input on the sensor (114d), and the mobile device (100) opens the calling application to establish the call with another user based on the tap input, as shown in FIGS. 6e and 6f.
As shown in FIG. 8, the mobile phone (100) is operated under a chemical refinery environment (800) to perform a video call, allowing the user to communicate with other users without hampering the user experience.
As shown in FIG. 9, the mobile phone (100) is operated under the chemical refinery environment (800) to capture the photo to show to other users, according to an embodiment as disclosed herein.
As shown in FIGS. 10a-10c, when the mobile device (100) hangs and the user is not able to perform any action, e.g., answer, reject, or reject with a message, the action can be performed with a combination of the sensors (114a-114b) using the proposed methods.
As shown in FIGS. 11a-11b, when the touch screen (108) of the mobile device (100) is broken such that the user interface is not visible and functionality such as answering or rejecting a call cannot be performed on the touch screen, the functionality can be performed using a combination of the sensors (114a-114b) using the proposed methods.
The processor (102) is configured to execute instructions stored in the memory (106) and to perform various processes. The communicator (104) is configured for communicating internally between internal hardware components and with external devices via one or more networks.
The memory (106) stores instructions to be executed by the processor (102). The memory (106) may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. In addition, the memory (106) may, in some examples, be considered a non-transitory storage medium. The term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted that the memory (106) is non-movable. In some examples, the memory (106) can be configured to store larger amounts of information than the memory. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache).
Further, at least one of the plurality of hardware components may be implemented through an artificial intelligence (AI) model. A function associated with AI may be performed through the non-volatile memory, the volatile memory, and the processor (102). The processor (102) may include one or a plurality of processors. At this time, one or a plurality of processors may be a general purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an AI-dedicated processor such as a neural processing unit (NPU).
The one or a plurality of processors control the processing of the input data in accordance with a predefined operating rule or artificial intelligence (AI) model stored in the non-volatile memory and the volatile memory. The predefined operating rule or artificial intelligence model is provided through training or learning.
Here, being provided through learning means that, by applying a learning algorithm to a plurality of learning data, a predefined operating rule or AI model of a desired characteristic is made. The learning may be performed in a device itself in which AI according to an embodiment is performed, and/or may be implemented through a separate server/system.
The AI model may consist of a plurality of neural network layers. Each layer has a plurality of weight values, and performs a layer operation through calculation of a previous layer and an operation of a plurality of weights. Examples of neural networks include, but are not limited to, convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), restricted Boltzmann Machine (RBM), deep belief network (DBN), bidirectional recurrent deep neural network (BRDNN), generative adversarial networks (GAN), and deep Q-networks.
The learning algorithm is a method for training a predetermined target device (for example, a robot) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction. Examples of learning algorithms include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
Although FIG. 1 shows various hardware components of the mobile device (100), it is to be understood that other embodiments are not limited thereto. In other embodiments, the mobile device (100) may include a smaller or larger number of components. Further, the labels or names of the components are used only for illustrative purposes and do not limit the scope of the invention. One or more components can be combined together to perform the same or a substantially similar function to handle the screen block event associated with the touch screen (108) of the mobile device (100).
FIGS. 2a and 2b are a flow diagram (S200) illustrating a method for operating the mobile device (100) during the screen block event associated with the touch screen (108) of the mobile device (100), according to an embodiment as disclosed herein. The operations (S202-S228) are performed by the screen block event controller (118). At S202, the method includes detecting the screen block event associated with the touch screen (108) of the mobile device (100). At S204, the method includes automatically configuring the mobile device (100) in the screen block mode, when the screen block event is detected.
At S206, the method includes determining the orientation of the mobile device (100) and the angle of the mobile device (100). At S208, the method includes determining whether the orientation of the mobile device (100) indicates the landscape mode or the portrait mode. In response to detecting that the orientation of the mobile device (100) indicates the landscape mode, at S210, the method includes configuring the sensors (114a, 114b, and 114e) placed at the front panel (110) of the mobile device (100) based on the orientation of the mobile device (100). In response to detecting that the orientation of the mobile device (100) indicates the portrait mode, at S212, the method includes configuring the sensors (114c, 114d, and 114f-114h) placed at the back panel (112) of the mobile device (100) based on the orientation of the mobile device (100).
At S214, the method includes receiving the first input on the configured sensor (114) placed at the front panel (110) of the mobile device (100). At S216, the method includes receiving the first input on the configured sensor (114c or 114d) placed at the back panel (112) of the mobile device (100).
At S218, the method includes determining the motion data from the first input performed on the configured sensor (114a-114h). At S220, the method includes mapping the motion data to the portion of the touch screen (108) of the mobile device (100). At S222, the method includes detecting the indicator indicating the application on the touch screen (108) of the mobile device (100) based on the mapping, when a change in a lux value and an exposure value is greater than a predefined threshold. The predefined threshold is configured by the user or the OEM.
At S224, the method includes displaying the indicator indicating the application on the touch screen (108) of the mobile device (100) based on the mapping.
At S226, the method includes receiving the second input on the configured sensor (114a-114h). At S228, the method includes launching the at least one identified application in the mobile device (100) based on the second input.
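The sequence of operations S202-S228 can be condensed into the following Python sketch; every method on the hypothetical device object merely stands in for the corresponding operation described above and is not an actual API.

```python
def screen_block_flow(device):
    """Condensed walk-through of operations S202-S228 (all helpers hypothetical)."""
    if not device.detect_screen_block_event():            # S202
        return
    device.enter_screen_block_mode()                      # S204
    orientation, angle = device.read_orientation()        # S206
    if orientation == "landscape":                        # S208
        sensors = device.configure_front_panel_sensors()  # S210
    else:
        sensors = device.configure_back_panel_sensors()   # S212
    first_input = device.wait_for_input(sensors)          # S214 / S216
    motion = device.extract_motion_data(first_input)      # S218
    portion = device.map_to_screen(motion)                # S220
    if device.lux_exposure_change_exceeds_threshold():    # S222
        device.show_indicator(portion)                    # S224
    second_input = device.wait_for_input(sensors)         # S226
    device.launch_identified_application(second_input)    # S228
```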
The various actions, acts, blocks, steps, or the like in the flow diagrams (S200) may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
The embodiments disclosed herein can be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.

Claims (15)

  1. A mobile device (100) for handling a screen block event associated with a touch screen (108) of the mobile device (100), comprising:
    a memory (106);
    a processor (102), coupled with the memory (106); and
    a screen block event controller (118), coupled with the memory (106) and the processor (102), configured to:
    detect the screen block event associated with the touch screen (108) of the mobile device (100);
    automatically configure the mobile device (100) in a screen block mode using a machine learning model, when the event is detected, wherein the screen block mode configures at least one sensor (114) associated with the mobile device (100) to enable operation of the mobile device (100) during the screen block event;
    receive a first input on the at least one configured sensor (114) associated with the mobile device (100) when the event is detected;
    identify at least one application on which to perform an action based on the first input;
    receive a second input on the at least one configured sensor (114); and
    perform an action on the at least one identified application in the mobile device (100) based on the second input.
  2. The mobile device (100) as claimed in claim 1, wherein the action comprises one of:
    launch the at least one identified application in the mobile device (100) based on the second input, and
    activate the at least one identified application in the mobile device (100) based on the second input.
  3. The mobile device (100) as claimed in claim 1, wherein configure the at least one sensor (114) associated with the mobile device (100) to enable the operation of the mobile device (100) during the screen block event comprises:
    determine at least one of an orientation of the mobile device (100);
    determine whether the orientation of the mobile device (100) indicates one of a landscape mode and a portrait mode; and
    perform at least one of:
    configure the at least one sensor (114) placed at a front panel (110) of the mobile device (100) based on at least one of the orientation of the mobile device (100) in response to detecting the orientation of the mobile device (100) indicates the landscape mode, and
    configure the at least one sensor (114) placed at a back panel (112) of the mobile device (100) based on at least one of the orientation of the mobile device (100) in response to detecting the orientation of the mobile device (100) indicates the portrait mode.
  4. The mobile device (100) as claimed in claim 1, wherein identify the at least one application on which to perform the action based on the first input comprises:
    determine motion data from the first input performed on the at least one configured sensor (114);
    map the motion data to at least one portion of the touch screen (108) of the mobile device (100); and
    display an indicator indicating the at least one application on the touch screen (108) of the mobile device (100) based on the mapping.
  5. The mobile device (100) as claimed in claim 4, wherein the motion data comprises at least one of a direction of the first input performed on the at least one configured sensor (114), a distance covered by the first input performed on the at least one configured sensor (114), an orientation of the mobile device (100), a rate of motion of the first input performed on the at least one configured sensor (114), and time unit of the first input performed on the at least one configured sensor (114).
  6. The mobile device (100) as claimed in claim 4, wherein the indicator dynamically moves to navigate across the at least one application of the mobile device (100) based on the motion data extracted from the first input performed on the at least one configured sensor (114).
  7. The mobile device (100) as claimed in claim 1, wherein the at least one application on the touch screen (108) of the mobile device (100) is determined based on at least one of a frequently used application, a preconfigured application, a recently used application, a location of the user, and a type of the screen block event.
  8. The mobile device (100) as claimed in claim 1, wherein the screen block event is caused due to at least one of water droplets on the touch screen (108) of the mobile device (100), a damage of at least one portion of the touch screen (108) of the mobile device (100), hanging of the mobile device (100), and at least one portion of the touch screen (108) of the mobile device (100) submerged under water.
  9. The mobile device (100) as claimed in claim 1, wherein automatically configure the mobile device (100) in the screen block mode using the machine learning model comprises:
    learn the user usage pattern of the mobile device (100) over a period of time using the machine learning model (190); and
    automatically configure the mobile device (100) in the screen block mode based on the learned user usage pattern of the mobile device (100).
  10. A method for operating a mobile device (100) during a screen block event associated with a touch screen (108) of the mobile device (100), comprising:
    detecting, by the mobile device (100), the screen block event associated with the touch screen (108) of the mobile device (100);
    automatically configuring, by the mobile device (100), the mobile device (100) in a screen block mode using a machine learning model, when the event is detected, wherein the screen block mode configures at least one sensor (114) associated with the mobile device (100) to enable operation of the mobile device (100) during the screen block event;
    receiving, by the mobile device (100), a first input on the at least one configured sensor (114) associated with the mobile device (100) when the event is detected;
    identifying, by the mobile device (100), at least one application on which to perform an action based on the first input and a user usage pattern;
    receiving, by the mobile device (100), a second input on the at least one configured sensor (114); and
    performing, by the mobile device (100), the action on the at least one identified application in the mobile device (100) based on the second input.
  11. The method as claimed in claim 10, wherein the action comprises one of:
    launching the at least one identified application in the mobile device (100) based on the second input, and
    activating the at least one identified application in the mobile device (100) based on the second input.
  12. The method as claimed in claim 10, wherein configuring, by the mobile device (100), the at least one sensor (114) associated with the mobile device (100) to enable the operation of the mobile device (100) during the screen block event comprises:
    determining, by the mobile device (100), at least one of an orientation of the mobile device (100);
    determining, by the mobile device, whether the orientation of the mobile device (100) indicates one of a landscape mode and a portrait mode; and
    performing, by the mobile device (100), at least one of:
    configuring the at least one sensor (114) placed at a front panel (110) of the mobile device (100) based on at least one of the orientation of the mobile device (100) in response to detecting the orientation of the mobile device (100) indicates the landscape mode, and
    configuring the at least one sensor (114) placed at a back panel (112) of the mobile device (100) based on at least one of the orientation of the mobile device (100) in response to detecting the orientation of the mobile device (100) indicates the portrait mode.
  13. The method as claimed in claim 10, wherein identifying, by the mobile device (100), the at least one application on which to perform the action based on the first input comprises:
    determining, by the mobile device (100), motion data from the first input performed on the at least one configured sensor (114);
    mapping, by the mobile device, the motion data to at least one portion of the touch screen (108) of the mobile device (100); and
    displaying, by the mobile device (100), an indicator indicating the at least one application on the touch screen (108) of the mobile device (100) based on the mapping.
  14. The method as claimed in claim 13, wherein the motion data comprises at least one of a direction of the first input performed on the at least one configured sensor (114), a distance covered by the first input performed on the at least one configured sensor (114), an orientation of the mobile device (100), a rate of motion of the first input performed on the at least one configured sensor (114), and time unit of the first input performed on the at least one configured sensor (114).
  15. The method as claimed in claim 13, wherein the indicator dynamically moves to navigate across the at least one application of the mobile device (100) based on the motion data extracted from the first input performed on the at least one configured sensor (114).
PCT/KR2021/003990 2020-07-23 2021-03-31 Mobile device operation during screen block event of mobile device and method thereof WO2022019436A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202041031598 2020-07-23
IN202041031598 2020-07-23

Publications (1)

Publication Number Publication Date
WO2022019436A1 (en)

Family

ID=79729220

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/003990 WO2022019436A1 (en) 2020-07-23 2021-03-31 Mobile device operation during screen block event of mobile device and method thereof

Country Status (1)

Country Link
WO (1) WO2022019436A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150062069A1 (en) * 2013-09-04 2015-03-05 Lg Electronics Inc. Mobile terminal and controlling method thereof
US20160156837A1 (en) * 2013-12-24 2016-06-02 Sony Corporation Alternative camera function control
US20170003879A1 (en) * 2014-12-25 2017-01-05 Kyocera Corporation Portable terminal and control method
KR20180047694A (en) * 2016-11-01 2018-05-10 엘지전자 주식회사 Mobile terminal
US20190079778A1 (en) * 2017-09-12 2019-03-14 Facebook, Inc. Systems and methods for automatically changing application start state based on device orientation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21845480; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21845480; Country of ref document: EP; Kind code of ref document: A1)