CN111176448A - Method and device for realizing time setting in non-touch mode, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111176448A
CN111176448A (application CN201911368129.7A)
Authority
CN
China
Prior art keywords
time
user
time setting
head
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911368129.7A
Other languages
Chinese (zh)
Inventor
贺思颖
刘杉
李松南
陈家君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201911368129.7A priority Critical patent/CN111176448A/en
Publication of CN111176448A publication Critical patent/CN111176448A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/109Time management, e.g. calendars, reminders, meetings or time accounting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Abstract

The invention discloses a method and an apparatus for implementing time setting in a non-touch manner, an electronic device, and a storage medium. The method comprises the following steps: displaying a time setting interface, wherein the time displayed in the time setting interface is an initial time; acquiring an image frame sequence comprising a plurality of image frames, the image frames being generated by shooting a user while the time setting interface is displayed; performing pose estimation processing on the image frames in the sequence to capture an action performed by the user for time setting while the time setting interface is displayed; setting the initial time displayed in the time setting interface according to the captured action to obtain an updated time; and displaying, in the time setting interface, the time changing from the initial time to the updated time. The invention solves the problem that, in the prior art, time setting depends on cumbersome manual operation.

Description

Method and device for realizing time setting in non-touch mode, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of computers, in particular to a method and a device for realizing time setting in a non-touch mode, electronic equipment and a storage medium.
Background
Clock applications such as timers and alarm clocks pervade people's daily lives.
Currently, most clock applications rely on manual operation for time setting. Taking a timer as an example, the user typically has to tap to enter a time setting interface, drag a number within a designated area of the interface to complete the time setting, and finally tap "start timing" to launch the timer's countdown function.
The inventor has realized that conventional time setting relies on manual operations that are too cumbersome, which easily results in a poor user experience.
Disclosure of Invention
Embodiments of the present invention provide a method, an apparatus, an electronic device, and a storage medium for implementing time setting in a non-touch manner, so as to solve the problem that time setting in the related art depends on cumbersome manual operation.
The technical solution adopted by the invention is as follows:
according to one aspect of the invention, a method for non-touch implementation of time setting comprises: displaying a time setting interface, wherein the time displayed in the time setting interface is initial time; acquiring an image frame sequence comprising a plurality of image frames, wherein the plurality of image frames in the image frame sequence are generated by shooting a user when the time setting interface is displayed; performing pose estimation processing on a plurality of image frames in the image frame sequence, and capturing an action performed by a user for time setting when the time setting interface is displayed; setting the initial time displayed in the time setting interface according to the captured action to obtain the updating time; and displaying the time in the time setting interface, wherein the time is changed from the initial time to the updating time.
According to one aspect of the invention, an apparatus for implementing time setting in a non-touch manner comprises: an interface display module, configured to display a time setting interface, the time displayed in the time setting interface being an initial time; an image acquisition module, configured to acquire an image frame sequence containing a plurality of image frames, the image frames being generated by shooting a user while the time setting interface is displayed; a motion capture module, configured to perform pose estimation processing on the image frames in the sequence and capture the action performed by the user for time setting while the time setting interface is displayed; a time setting module, configured to set the initial time displayed in the time setting interface according to the captured action to obtain an updated time; and a time display module, configured to display, in the time setting interface, the time changing from the initial time to the updated time.
In one embodiment, the motion capture module comprises: a pose estimation unit, configured to perform head pose estimation processing on each of the image frames in the sequence and determine the pose of the user's head in each image frame; and a head motion capture unit, configured to capture, according to the poses of the user's head in the image frames, the short-term head motion performed by the user for time setting while the time setting interface is displayed.
In one embodiment, the pose estimation unit includes: a key point identification subunit, configured to identify key points corresponding to the user's head in the image frame, obtaining the key point positions of the user's head in the image frame; and a pose determination subunit, configured to determine the pose of the user's head in the image frame from those key point positions.
In one embodiment, the head motion capture unit includes: an offset detection subunit, configured to detect, from the key point positions of the user's head in each image frame, whether the user's head has shifted along the direction of gravity; and a head motion determination subunit, configured to determine, from the detection result, the short-term head motion performed by the user for time setting while the time setting interface is displayed.
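As a rough sketch (not the patent's actual implementation), the offset detection described above can be illustrated by tracking the vertical coordinate of a single head key point across the image frames; the key-point name, the image coordinate convention (y grows downward), and the threshold below are all assumptions:

```python
def detect_vertical_offset(keypoints_per_frame, point="nose_tip", threshold=10.0):
    """Return +1 if the head shifted along gravity (down), -1 if up, 0 otherwise.

    keypoints_per_frame: list of dicts mapping a key-point name to an (x, y)
    pixel position, one dict per image frame (hypothetical format).
    """
    ys = [kp[point][1] for kp in keypoints_per_frame if point in kp]
    if len(ys) < 2:
        return 0  # not enough frames to detect motion
    delta = ys[-1] - ys[0]  # net displacement along the gravity direction
    if delta > threshold:
        return 1   # head lowered
    if delta < -threshold:
        return -1  # head raised
    return 0
```

The returned value plays the role of a detection result that the head motion determination subunit could map to a short-term head motion.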
In one embodiment, the time setting module includes: a time adjustment unit, configured to adjust the initial time in the adjustment mode corresponding to the captured action, obtaining the updated time.
In one embodiment, the time adjustment unit includes: a state value determination subunit, configured to determine the action state value corresponding to the captured action; and a time adjustment subunit, configured to adjust the initial time in the adjustment mode indicated by the action state value, obtaining the updated time.
In one embodiment, the time adjustment subunit includes: an incremental value acquisition subunit, configured to acquire an incremental time value when the adjustment mode indicated by the action state value is time increment; an accumulation subunit, configured to sum the initial time and the incremental time value to obtain a first intermediate time; a first updating subunit, configured to take the first intermediate time as the updated time if the first intermediate time does not exceed a maximum time value; and a second updating subunit, configured to take the maximum time value as the updated time if the first intermediate time exceeds the maximum time value.
In one embodiment, the time adjustment subunit includes: a decrement value acquisition subunit, configured to acquire a decrement time value when the adjustment mode indicated by the action state value is time decrement; a difference calculating subunit, configured to subtract the decrement time value from the initial time to obtain a second intermediate time; a third updating subunit, configured to take the minimum time value as the updated time if the second intermediate time is less than a minimum time value; and a fourth updating subunit, configured to take the second intermediate time as the updated time if it is not less than the minimum time value.
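A minimal sketch of the increment and decrement adjustment modes with their clamping behavior, assuming times are represented in minutes and using an illustrative step size and bounds (the patent does not fix these values):

```python
def adjust_time(initial, state, step=15, t_min=0, t_max=99 * 60):
    """Adjust a time value (in minutes) according to an action state value.

    state: +1 -> time increment, -1 -> time decrement, 0 -> no adjustment.
    The result is clamped to [t_min, t_max], mirroring the updating
    subunits that cap the intermediate time at the maximum/minimum values.
    """
    if state > 0:
        return min(initial + step, t_max)  # first/second updating subunits
    if state < 0:
        return max(initial - step, t_min)  # third/fourth updating subunits
    return initial  # no adjustment: initial time kept as the updated time
```

For example, with an initial time of 10:00 (600 minutes), one increment action yields 10:15.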
In one embodiment, the time adjustment subunit includes: a timing subunit, configured to start a time counter when the adjustment mode indicated by the action state value is no time adjustment; and a fifth updating subunit, configured to take the initial time as the updated time when the time value of the time counter exceeds a set threshold.
In one embodiment, the time adjustment subunit further includes: a countdown subunit, configured to jump from the time setting interface to a countdown interface when the time value of the time counter exceeds the set threshold, and to display, in the countdown interface, a countdown starting from the updated time.
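The no-adjustment mode and the jump to the countdown interface can be sketched as a frame-driven idle counter; the threshold of 30 frames (roughly one second at 30 fps) and the state encoding are assumptions for illustration only:

```python
class SettingSession:
    """Confirms the set time once no adjustment persists past a threshold."""

    def __init__(self, threshold_frames=30):
        self.threshold = threshold_frames  # assumed ~1 s at 30 fps
        self.counter = 0                   # the "time counter" of the timing subunit

    def feed(self, state, current_time):
        """Process one captured action state value.

        Returns ("countdown", time) once the idle counter exceeds the
        threshold (jump to the countdown interface), else ("setting", time).
        state == 0 means the adjustment mode is no time adjustment.
        """
        if state == 0:
            self.counter += 1
            if self.counter > self.threshold:
                return ("countdown", current_time)  # fifth updating + countdown subunits
        else:
            self.counter = 0  # any adjustment restarts the counter
        return ("setting", current_time)
```
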
In one embodiment, the apparatus further comprises: a time updating module, configured to set the updated time as the new initial time and return to the step of acquiring the image frame sequence, thereby implementing a cyclic setting process for the time displayed in the time setting interface.
According to one aspect of the invention, an electronic device includes a processor and a memory, the memory storing computer-readable instructions which, when executed by the processor, implement the method for implementing time setting in a non-touch manner described above.
According to one aspect of the invention, a storage medium has a computer program stored thereon which, when executed by a processor, implements the method for implementing time setting in a non-touch manner described above.
In the above technical solution, after entering the time setting interface, the user only needs to perform an action directed at time setting to accomplish non-touch time setting; the operation is convenient and fast, and the user experience can be effectively improved.
Specifically, while the time setting interface is displayed, an image frame sequence acquired by a camera assembly is obtained, and pose estimation processing is performed on the image frames in the sequence to capture the action performed by the user for time setting. The initial time displayed in the time setting interface is then set according to the captured action to obtain the updated time, and the displayed time changes from the initial time to the updated time. In other words, during time setting the user needs no manual operation at all; performing an action replaces it, the updated time set according to that action is displayed in the time setting interface, and the problem that time setting in the prior art depends on cumbersome manual operation is solved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic illustration of an implementation environment in accordance with the present invention.
Fig. 2 is a block diagram illustrating a hardware configuration of an electronic device according to an example embodiment.
FIG. 3 is a flow chart illustrating a method of non-touch enabled time setting in accordance with an exemplary embodiment.
FIG. 4 is a flowchart of steps 370 and 390 in one embodiment of the embodiment corresponding to FIG. 3.
FIG. 5 is a flowchart of step 371 in one embodiment in the corresponding embodiment of FIG. 4.
Fig. 6 is a schematic diagram of a specific implementation of time setting in which the adjustment mode is time increment, according to the embodiment corresponding to fig. 5.
Fig. 7 is a schematic diagram of a specific implementation of time setting in which the adjustment mode is time decrement, according to the embodiment corresponding to fig. 5.
Fig. 8 is a schematic diagram of a specific implementation of time setting in which the adjustment mode is no time adjustment, according to the embodiment corresponding to fig. 5.
FIG. 9 is a flow diagram for one embodiment of step 350 of the corresponding embodiment of FIG. 3.
FIG. 10 is a flow diagram of step 351 in one embodiment of the corresponding embodiment of FIG. 9.
Fig. 11 is a detailed schematic diagram of key points corresponding to the head of the user in the image frame according to the embodiment shown in fig. 10.
FIG. 12 is a flowchart of one embodiment of step 353 of the corresponding embodiment of FIG. 9.
Fig. 13 is a schematic diagram of a specific implementation of a method for implementing time setting without touch in an application scenario.
Fig. 14 is a diagram illustrating update times of the time setting interface related to the corresponding scene in fig. 13.
FIG. 15 is a schematic diagram of the countdown interface involved in the corresponding scenario of FIG. 13 showing the start of a countdown.
FIG. 16 is a block diagram illustrating an apparatus for non-touch enabled time setting according to an example embodiment.
FIG. 17 is a block diagram illustrating an electronic device in accordance with an example embodiment.
While specific embodiments of the invention have been shown by way of example in the drawings and will be described in detail hereinafter, such drawings and description are not intended to limit the scope of the inventive concepts in any way, but rather to explain the inventive concepts to those skilled in the art by reference to the particular embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
FIG. 1 is a schematic illustration of an implementation environment involved in a method for non-touch implementation of time setting. The implementation environment includes a user 100 and an electronic device 200.
Specifically, the electronic device 200 may deploy a device for implementing time setting without touch, thereby facilitating the user 100 to use various clock applications, such as a timer, an alarm clock, and the like. The electronic device 200 may be a desktop computer, a notebook computer, a tablet computer, a smart phone, a palm computer, and the like, which is not limited herein.
When the apparatus for implementing time setting in a non-touch manner runs in the electronic device 200, the clock application correspondingly loads the time setting interface and enters the time setting phase.
The user 100 may then perform a corresponding action for time setting while the time setting interface displays the time.
As the user 100 interacts with the electronic device 200, the device captures the action the user performs for time setting while the time setting interface is displayed, performs the time setting according to the captured action, and displays the result in the time setting interface, thereby realizing non-touch time setting and effectively improving the user experience.
Fig. 2 is a block diagram illustrating a hardware configuration of an electronic device according to an example embodiment.
It should be noted that the electronic device 200 is only an example adapted to the present invention and should not be considered as limiting the scope of use of the invention in any way. Nor should the electronic device be interpreted as needing to rely on, or necessarily include, every component of the exemplary electronic device 200 shown in fig. 2.
The hardware structure of the electronic device 200 may vary considerably depending on configuration and performance. As shown in fig. 2, the electronic device 200 includes: a power supply 210, an interface 230, at least one memory 250, at least one Central Processing Unit (CPU) 270, and a screen 290.
Specifically, the power supply 210 is used to provide operating voltages for various hardware devices on the electronic device 200.
The interface 230 includes at least one wired or wireless network interface for interacting with external devices. Of course, in other examples of the present invention, the interface 230 may further include at least one serial-to-parallel conversion interface 233, at least one input/output interface 235, at least one USB interface 237, etc., as shown in fig. 2, which is not limited herein.
The memory 250 serves as a carrier for resource storage and may be a read-only memory, a random access memory, a magnetic disk, an optical disk, etc. The resources stored thereon include an operating system 251, applications 253, data 255, etc., and storage may be transient or permanent.
The operating system 251 manages and controls the hardware devices and applications 253 on the electronic device 200, enabling the central processing unit 270 to operate on and process the mass data 255 in the memory 250; it may be Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The application 253 is a computer program that performs at least one specific task on the operating system 251, and may include at least one module (not shown in fig. 2), each of which may contain a series of computer-readable instructions for the electronic device 200. For example, the apparatus for implementing time setting without touch may be regarded as the application 253 deployed in the electronic device 200.
The data 255 may be photos or pictures stored on a magnetic disk, or an image frame sequence or the like stored in the memory 250.
The central processor 270 may include one or more processors and is configured to communicate with the memory 250 through at least one communication bus to read computer-readable instructions stored in the memory 250, and further implement operations and processing of the mass data 255 in the memory 250. The method of non-touch enabled time setting is accomplished, for example, by central processor 270 reading a series of computer readable instructions stored in memory 250.
The screen 290, which may be a liquid crystal display or an electronic ink display, provides an output interface between the electronic device 200 and the user, through which output content formed by any combination of text, pictures, or video is presented to the user. For example, the time setting interface is displayed to show an initial time.
Furthermore, the present invention can be implemented by hardware circuits or by a combination of hardware circuits and software, and thus, the implementation of the present invention is not limited to any specific hardware circuits, software, or a combination of both.
Referring to fig. 3, in an exemplary embodiment, a method for implementing time setting without touch is applied to an electronic device in the implementation environment shown in fig. 1, and the structure of the electronic device may be as shown in fig. 2.
The method for realizing time setting without touch can be executed by the electronic equipment, and can also be understood as being executed by an application program (namely, a device for realizing time setting without touch) running in the electronic equipment. In the following method embodiments, for convenience of description, the main execution subject of each step is described as an electronic device, but the method is not limited thereto.
The method for realizing the time setting in a non-touch mode can comprise the following steps:
step 310, displaying a time setting interface.
First, it should be explained that the time setting interface is used to set the time for the corresponding clock application; it is loaded when the apparatus for implementing time setting in a non-touch manner runs in the electronic device and is then displayed on the screen of the electronic device.
The time setting interface may serve clock applications such as timers and alarm clocks, for example, setting a countdown start time for a timer, or setting an alarm's ring start time, ring duration, ring interval, and so on. Therefore, the method for implementing time setting in a non-touch manner provided by this embodiment can be applied to different application scenarios according to the scope of the time setting interface, including but not limited to: a countdown scenario, an alarm clock scenario, etc.
Second, the time displayed in the time setting interface is the initial time.
That is, after entering the time setting interface, the interface displays a default time, i.e., the initial time. It can also be understood that the subsequent time setting will start from this initial time.
Taking the alarm clock scenario as an example, if the user wishes to set the ring start time, the initial time shown in the time setting interface is the local time at the moment the interface is entered. For example, if the user enters the time setting interface at 10:00 local time, the initial time shown in the interface is 10:00, and the subsequent time setting will start from 10:00.
It should be noted that once the time setting interface is entered and the initial time is displayed, the initial time does not change as the local time changes; it changes only through the settings made by the user's subsequent actions.
Step 330, an image frame sequence comprising a plurality of image frames is acquired.
Wherein a plurality of image frames in the image frame sequence are generated by shooting a user while the time setting interface is displayed.
It should be understood that shooting may be continuous or intermittent. Accordingly, the image frame sequence captured while the time setting interface is displayed may be a segment of video generated by continuous shooting or a set of pictures generated by intermittent shooting, and each image frame in the sequence may correspond to a video frame in that segment or to one of those pictures; this embodiment is not limited in this respect.
Further, the image frames in the sequence may be acquired by the camera assembly in real time, or may have been acquired by the camera assembly earlier and stored in the memory of the electronic device. In other words, when acquiring the image frame sequence, the electronic device may obtain frames captured by the camera assembly in real time, or read from memory frames captured by the camera assembly during a historical period; this embodiment is not limited in this respect.
It should be noted that the camera assembly may be embedded in the electronic device, such as the camera of a smartphone, or may be independent of the electronic device, such as cameras deployed around the user, so as to capture images of the user.
When the camera assembly is independent of the electronic device, a network connection is established in advance between them, so that data can be transmitted over this connection; the transmitted data includes, but is not limited to, image frame sequences, data transmission instructions, and the like. For example, when the time setting interface is displayed, the electronic device actively issues a data transmission instruction to the camera assembly, and the camera assembly uploads the image frame sequence in response.
Step 350, performing pose estimation processing on the image frames in the image frame sequence, and capturing the action performed by the user for time setting while the time setting interface is displayed.
As mentioned above, the inventor realized that the manual operations required for time setting, from entering the time setting interface to starting the timer's countdown function, are too cumbersome for the user.
Therefore, in this embodiment, once the time setting stage is entered and the time setting interface shows the initial time, the user replaces tedious manual operation by performing an action, realizing non-touch time setting and effectively improving the user experience.
Types of actions performed by the user include, but are not limited to: head movements, body movements, facial expression movements, and the like. For example, the user may perform time setting by lowering or raising the head instead of dragging a number within a designated area of the time setting interface, may replace manual operation with body movements such as jumping or squatting, or may perform time setting with facial expression movements such as blinking or opening the mouth.
For the electronic device, instead of listening for manual operations triggered by the user on the screen for time setting, it captures the action the user performs for time setting while the time setting interface is displayed.
Specifically, motion capture is performed through pose estimation processing on the image frames in the sequence.
As previously mentioned, the types of actions performed by the user include, but are not limited to: head movements, body movements, facial expression movements, and the like. Therefore, the pose estimation scheme provided by the embodiment may be adaptively adjusted according to the type of the action performed by the user, for example, the head pose estimation is suitable for head actions, the body pose estimation is suitable for body actions, the face recognition is suitable for facial expression actions, and the like.
After capturing the action performed by the user for time setting while the time setting interface is displayed, the time can subsequently be set according to the captured action, achieving the goal of non-touch time setting.
Step 370, setting the initial time displayed in the time setting interface according to the captured action, and obtaining the updated time.
Wherein, the update time refers to the time after the initial time displayed in the time setting interface is set according to the captured action.
Here, the initial time may be directly adjusted to the update time according to the captured action, i.e. the captured action is considered to indicate the time actually required by the user. For example, if the initial time is 10 o'clock and the captured action indicates that the time actually required by the user is 11 o'clock, the update time is set to 11 o'clock.
In addition, the inventors have also found that the user may change his mind during the time setting, or that the user's needs may not be satisfied by a single time setting. For example, one time setting can only set the initial time 10:00 to the update time 10:15; in this case, in order to reach the ringing start time of 11:00 that the user wishes to set, multiple time settings need to be performed.
That is, the initial time may instead be adjusted gradually according to the captured actions, i.e. each captured action is considered to indicate the range by which the user actually needs to adjust the time. For example, the initial time is still 10:00, and assuming that a captured action indicates that the range the user needs to adjust is +15 minutes, the update time is first adjusted from 10:00 to 10:15; if the time actually needed by the user is 11:00, the update time is further adjusted step by step from 10:15 through 10:30 and 10:45 to 11:00. The time setting process then corresponds to a process of adjusting the time over several cycles.
Thus, in one implementation of an embodiment, the time setting is essentially a plurality of cycles of time adjustment, including a plurality of motion captures and a plurality of time settings.
Specifically, the update time is taken as the new initial time, and step 330 of obtaining the image frame sequence is executed again, so that pose estimation is performed on the image frames in the image frame sequence and the next time setting is carried out with the newly captured action based on the updated initial time, thereby implementing the cyclic setting of the time presented in the time setting interface.
In other words, after one time setting is completed, since the update time becomes the new initial time, each subsequent time setting is actually performed based on the update time of the previous setting; the user requirement is then met through multiple cycles of time adjustment, namely multiple action captures and multiple time settings.
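The cyclic adjustment described above can be expressed as a simple loop in which each cycle's update time becomes the next cycle's initial time. The following is an illustrative Python sketch, not the patent's implementation; the action names, the `run_time_setting` function, the minute-based units, and the 15-minute step are all assumptions for the example.

```python
def run_time_setting(initial_time, actions, delta=15):
    """Apply a sequence of captured actions; after each cycle, the
    update time becomes the initial time of the next cycle.
    Times are in minutes for illustration."""
    t = initial_time
    for action in actions:            # one iteration = one capture + one setting
        if action == "head_up":       # adjustment mode: time increment
            t = t + delta
        elif action == "head_down":   # adjustment mode: time decrement
            t = t - delta
        else:                         # no adjustment: setting is finished
            break
    return t
```

For instance, starting from 10:00 (600 minutes), two "head up" actions followed by one "head down" action would leave the time at 10:15.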
Specifically, as shown in fig. 4, in an implementation of an embodiment, step 370 may include the steps of:
step 371, adjusting the initial time according to the adjustment mode corresponding to the captured motion, so as to obtain the updated time.
The adjustment mode corresponding to the captured action includes, but is not limited to: no time adjustment, time increment, time decrement, and the like.
Continuing the foregoing alarm clock scenario, assume that the user enters the time setting interface at a local time of 10 o'clock, so the initial time displayed in the time setting interface is 10 o'clock. Based on the action performed by the user for the time setting while the time setting interface is displayed, the initial time of 10 o'clock will be set according to the captured action.
Assuming that the ringing start time the user wishes to set is 11 o'clock, the user performs a "head up" action while the time setting interface is displayed in order to set the time from 10 o'clock to 11 o'clock; if the adjustment mode corresponding to the "head up" action is time increment, the initial time of 10 o'clock is adjusted to the update time of 11 o'clock.
Step 390, displaying the time in the time setting interface, wherein the time is changed from the initial time to the updating time.
That is, as the initial time is set according to the captured action, the time displayed in the time setting interface changes accordingly, i.e. from the initial time to the update time; this display of the update time in the time setting interface is also shown as step 391 in fig. 4.
In the alarm clock scenario, the time displayed in the time setting interface changes from the initial time of 10 o'clock to the update time of 11 o'clock, thereby giving the user feedback that the time setting is complete.
Here, the presentation may be direct, that is, the update time replaces the initial time and is presented in the time setting interface; alternatively, the presentation may be dynamic, that is, the time shown in the time setting interface transitions gradually from the initial time to the update time.
Through the above process, non-touch time setting is realized: the user no longer sets the time through overly complicated manual operation but replaces it with performed actions, which solves the problem in the prior art that time setting depends on complicated manual operation, gives the clock new vitality, and gives the user a brand-new use experience.
Referring to fig. 5, in an exemplary embodiment, step 371 may include the following steps:
at step 3711, an action state value corresponding to the captured action is determined.
Wherein the action state value is used to uniquely represent the captured action.
As previously mentioned, the types of actions performed by the user include, but are not limited to: head movements, body movements, facial expression movements, and the like. Furthermore, the head actions include head lowering, head raising and the like; the body actions comprise upward jumping, squatting and the like; facial expression movements also include blinking, opening the mouth, and the like.
Accordingly, different action state values indicate different actions performed by the user. For example, the action state value corresponding to the "head-down" action is "1", and the action state value corresponding to the "head-up" action is "2". Alternatively, the action state value corresponding to the "jump up" action is "a", and the action state value corresponding to the "squat down" action is "b". Further, the action state value corresponding to the "blink" action may be "-", and the action state value corresponding to the "open mouth" action may be "/", and so on.
In other words, for the electronic device, the captured motion is actually represented by any one or a combination of a series of characters, letters, and numbers, i.e., the motion state value. Then, as the action state value is determined, the action performed by the user for the time setting while the time setting interface is displayed may be accordingly known.
Step 3713, adjusting the initial time according to the adjustment mode indicated by the action state value to obtain the update time.
The adjusting mode is used for guiding the electronic equipment to set time. As previously described, the adjustment modes include time increment, time decrement, and time no adjustment.
It will be appreciated that the user no longer sets the time by manual operation but by performing actions, and different actions performed by the user indicate different adjustment modes desired by the user.
At this time, the action state value actually indicates the adjustment mode desired by the user. For example, if the action state value corresponding to the "head-down" action is "1", the adjustment mode desired by the user is time decrement; if the action state value corresponding to the "head-up" action is "2", the adjustment mode desired by the user is time increment. Alternatively, if the action state value corresponding to the "jump up" action is "a", the desired adjustment mode is time increment; if the action state value corresponding to the "squat down" action is "b", the desired adjustment mode is time decrement. Similarly, if the action state value corresponding to the "blink" action is "-", the desired adjustment mode is time increment, and if the action state value corresponding to the "open mouth" action is "/", the desired adjustment mode is time decrement, and so on.
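Using the example state values given above, the mapping from action state value to adjustment mode amounts to a lookup table. The following Python sketch is illustrative only; the dictionary, the function name, and the choice to treat an unrecognized state value as "no adjustment" are assumptions, not part of the patent.

```python
# Example mapping from action state values to adjustment modes,
# using the sample values from the text.
ADJUSTMENT_MODE = {
    "1": "decrement",  # "head-down" action
    "2": "increment",  # "head-up" action
    "a": "increment",  # "jump up" action
    "b": "decrement",  # "squat down" action
    "-": "increment",  # "blink" action
    "/": "decrement",  # "open mouth" action
}

def adjustment_mode_for(state_value):
    # Assumption: an unrecognized state value means "no adjustment".
    return ADJUSTMENT_MODE.get(state_value, "none")
```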
The following describes the procedure of time setting in different adjustment modes in detail with reference to fig. 6 to 8.
As shown in fig. 6, t0 represents the initial time, t2 represents the maximum time value, and the interval Δt represents the increment time value.
When the adjustment mode indicated by the action state value is time increment, the increment time value Δt is acquired.
The initial time t0 is accumulated with the increment time value Δt to obtain the first intermediate time t'.
If the first intermediate time t' does not exceed the maximum time value t2, indicating that the time the user desires to set has not reached the upper time limit, the first intermediate time t' is taken as the update time t.
Otherwise, if the first intermediate time t' is greater than the maximum time value t2, indicating that the time the user desires to set has reached the upper time limit, the maximum time value t2 is taken as the update time t. Thereby, the time setting whose adjustment mode is time increment is realized.
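The increment branch of fig. 6 amounts to an add-then-clamp operation. A minimal Python sketch, with an assumed function name and minute-based units:

```python
def increment_time(t0, delta_t, t2):
    """Time-increment adjustment (fig. 6): accumulate the increment
    time value Δt onto t0, capped at the maximum time value t2."""
    t_prime = t0 + delta_t      # first intermediate time t'
    return min(t_prime, t2)     # clamp at the upper time limit
```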
As shown in fig. 7, t0 represents the initial time, t1 represents the minimum time value, and the interval Δt represents the decrement time value.
When the adjustment mode indicated by the action state value is time decrement, the decrement time value Δt is acquired.
The difference between the initial time t0 and the decrement time value Δt is calculated to obtain the second intermediate time t'.
If the second intermediate time t' is less than the minimum time value t1, indicating that the time the user desires to set has reached the lower time limit, the minimum time value t1 is taken as the update time t.
Otherwise, if the second intermediate time t' is not less than the minimum time value t1, indicating that the time the user desires to set has not reached the lower time limit, the second intermediate time t' is taken as the update time t.
Thereby, the time setting whose adjustment mode is time decrement is realized.
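The decrement branch of fig. 7 is the mirror image: subtract, then clamp at the minimum. Again a sketch with assumed names and units:

```python
def decrement_time(t0, delta_t, t1):
    """Time-decrement adjustment (fig. 7): subtract the decrement
    time value Δt from t0, floored at the minimum time value t1."""
    t_prime = t0 - delta_t      # second intermediate time t'
    return max(t_prime, t1)     # clamp at the lower time limit
```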
Of course, in other embodiments, when the time the user desires to set has reached the upper time limit, the minimum time value t1 may be used as the update time t; or, when the time the user desires to set has reached the lower time limit, the maximum time value t2 may be used as the update time t. This corresponds to the time adjustment wrapping around through a complete cycle, and this embodiment is not limited in this respect.
As shown in fig. 8, T0 represents the initial time, T represents the set threshold, and T "represents the time value of the time counter.
When the adjustment mode indicated by the action state value is no time adjustment, a time counter is started.
If the time value T "of the time counter exceeds a set threshold T, indicating that the user desires no time setting any more, the initial time T0 is taken as the updated time T.
Otherwise, if the time value T "of the time counter does not exceed the set threshold T, indicating that the user may still desire to make a time setting, the waiting continues.
Thus, the time setting whose adjustment mode is no time adjustment is realized.
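The "no adjustment" branch of fig. 8 can be sketched as a timeout loop around the time counter. This is an illustrative sketch only: the function name and the `poll` callback (standing in for the ongoing capture loop) are assumptions, and `time.monotonic` is simply one convenient way to realize the time counter.

```python
import time

def wait_for_confirmation(t0, threshold_s, poll=lambda: None):
    """Start a time counter; once its value T'' exceeds the set
    threshold T (threshold_s, in seconds), the user is considered
    done setting, and the initial time t0 becomes the update time."""
    start = time.monotonic()
    while time.monotonic() - start <= threshold_s:
        poll()          # keep capturing; a new action would restart the setting
    return t0           # counter exceeded the threshold: t = t0
```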
Further, once the user no longer desires to set the time, i.e., the time setting is completed, the different functions of the various clock applications are started accordingly.
For example, in a countdown scenario, the countdown function starts, i.e., jumps from the time setting interface to a countdown interface in which the countdown from the update time is presented.
Or, in the alarm clock scenario, the alarm clock function is started: the time setting interface jumps to the alarm clock storage interface, and the user is prompted to store the created alarm clock in the alarm clock storage interface.
Through the cooperation of the above embodiments, the time setting is performed according to different adjustment modes along with different actions performed by the user, so that the non-touch time setting is realized.
Referring to fig. 9, in an exemplary embodiment, step 350 may include the steps of:
step 351, performing head pose estimation processing on each image frame aiming at a plurality of image frames in the image frame sequence, and determining the pose of the head of the user in each image frame.
As described above, a plurality of image frames in the image frame sequence are generated by photographing the user while the time setting interface is displayed, and thus, the posture of the user's head in the image frames refers to the instantaneous head motion of the user while the time setting interface is displayed.
Based on this, performing the head pose estimation process on a plurality of image frames in the image frame sequence yields a plurality of instantaneous head motions performed by the user while the time setting interface is displayed. Then, by concatenating these instantaneous head motions in chronological order, the short-time head action performed by the user while the time setting interface is displayed can be obtained.
And 353, capturing short-time head actions performed by the user for time setting when the time setting interface is displayed according to the posture of the head of the user in each image frame.
It will be appreciated that the short-time head action performed by the user while the time-setting interface is displayed may also be some other unrelated action, such as a short-time head action like scratching the head, which may result in the electronic device subsequently making an incorrect time setting.
For this reason, in the present embodiment, it is ensured that the captured short-time head action is one performed by the user for the time setting while the time setting interface is displayed, so as to sufficiently guarantee the accuracy of the time setting.
The capturing process of the short-time head motion performed by the user for the time setting at the time of the time setting interface display will be described in detail below with reference to fig. 10 to 12.
Referring to FIG. 10, in an exemplary embodiment, step 351 may include the following steps:
step 3511, identifying the key points in the image frame corresponding to the head of the user, and obtaining the key point positions of the head of the user in the image frame.
It is understood that the head of the user has a corresponding head contour in the image frame, and the head contour is formed by a series of pixel points in the image frame, so that a key pixel point in the series of pixel points is regarded as a key point in the image frame corresponding to the head of the user.
Taking the face in the head of the user as an example, the face structure includes various categories of eyebrows, eyes, nose, mouth, ears, and chin, and accordingly, the key points may also have different categories according to various categories of the face structure, for example, eye category key points, mouth category key points, and so on.
As shown in fig. 11, the image frame has 68 key points corresponding to a human face, and may specifically include, according to different categories: 6 eye category key points 43-48 for eyes in the image frame, 20 mouth category key points 49-68 for mouth in the image frame, and so on.
Based on this, in the present embodiment, the key point position is substantially a coordinate (x, y), that is, the key point of the head of the user in the image frame is uniquely represented by a coordinate manner. Similarly, the category of the keypoint location will also vary with the variation of the keypoint category, for example, keypoint locations classified according to different categories include: keypoint locations corresponding to eye category keypoints, keypoint locations corresponding to mouth category keypoints, and so on.
Optionally, the key point identification is implemented based on a deep learning model, that is, an image frame is input into the deep learning model, so that the position of the key point in the image frame corresponding to the head of the user can be extracted.
That is, the deep learning model essentially constructs a mathematical mapping relationship between the image frame and the key point positions, and then, based on the mathematical mapping relationship, the key point positions of the user's head in the image frame can be obtained from the image frame.
It is noted that the deep learning model is generated by training a base model through a large number of image frames labeled with key point positions, where the base model includes, but is not limited to, a neural network model, a convolutional network model, a residual network model, and the like, and this embodiment does not limit this.
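The 68-point layout of fig. 11 matches the widely used "68 facial landmarks" convention (as produced, for example, by dlib's pretrained shape predictor); the category ranges below are taken from the text, while the data layout (a 0-indexed list of (x, y) coordinates) is an assumption for illustration.

```python
# Keypoint categories from the text: eye-category keypoints 43-48,
# mouth-category keypoints 49-68 (1-based indices, as in fig. 11).
EYE_RANGE = range(43, 49)
MOUTH_RANGE = range(49, 69)

def keypoints_by_category(landmarks):
    """Split a 68-element list of (x, y) keypoint positions into the
    categories used by the text. `landmarks` is assumed 0-indexed."""
    return {
        "eye":   [landmarks[i - 1] for i in EYE_RANGE],
        "mouth": [landmarks[i - 1] for i in MOUTH_RANGE],
    }
```

A sequence of such category positions then forms the keypoint position sequence representing the head pose described in step 3513.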
Step 3513, determining the pose of the user's head in the image frame from the keypoint locations of the user's head in the image frame.
As can be seen from the above, the pose of the user's head in the image frame will be represented as a sequence of keypoint locations, i.e. constituted by the keypoint locations of the user's head in one or more categories in the image frame. For example, the key point position sequence is { eyebrow key point position, eye key point position, nose key point position, mouth key point position, ear key point position, chin key point position }, or the key point position sequence is { nose key point position }.
The process of capturing the short-time head action performed by the user for the time setting while the time setting interface is displayed will be further described below in connection with the keypoint locations.
Referring to FIG. 12, in an exemplary embodiment, step 353 may include the steps of:
step 3531, detecting whether the head of the user is shifted in the direction of gravity according to the positions of the key points of the head of the user in each image frame.
Regardless of whether the camera module is embedded in the electronic device or disposed around the user independently of it, the inventors recognized that the field of view of the camera module is relatively fixed; therefore, the key points corresponding to the user's head in each image frame, and likewise the key point positions of the user's head in each image frame, are also relatively fixed.
Then, once the user's head has changed at the key point locations in each image frame, it indicates that the user may have performed a short-term head action such as "head down" or "head up".
Furthermore, the inventor has found that a short head action such as "head-down" or "head-up" means that the head of the user is displaced downward or upward in the direction of gravity.
Therefore, in this embodiment, the essence of the motion capture process is to determine whether the change in the position of the key point corresponds to a downward or upward shift of the head of the user along the direction of gravity.
It can also be understood that different short-time head actions performed by the user cause different changes in the key point positions of the user's head across the image frames, so the detection results differ accordingly; that is, the detection result indicates which type of position shift of the user's head along the gravity direction the key point position change corresponds to.
For example, if the user performs a "head-down" action, the positions of the key points of the user's head in each image frame will be shifted downward in the vertical direction, and the detection result indicates that the positions of the key points are changed in accordance with the downward position shift of the user's head in the gravity direction.
If the user performs the head-up action and the positions of the key points of the head of the user in each image frame are shifted upwards along the vertical direction, the detection result indicates that the positions of the key points are changed according to the situation that the head of the user is shifted upwards along the gravity direction.
Of course, in order to sufficiently guarantee the accuracy of motion capture, an offset threshold may be set. If the position offset does not exceed the offset threshold, it is considered that the user has not performed a short-time head action for the time setting, i.e. capturing the short-time head action performed by the user for the time setting while the time setting interface is displayed has failed; the process then returns to step 330 and continues to acquire an image frame sequence containing a plurality of image frames.
The offset threshold may be flexibly adjusted according to the actual needs of the application scenario, and the embodiment is not limited herein.
Step 3533, determining a short-term head action performed by the user for time setting when the time setting interface is displayed according to the detection result.
From the above, when the detection result indicates that the position of the key point changes in accordance with the downward position deviation of the head of the user along the gravity direction, the short-time head movement performed by the user for the time setting when the time setting interface is displayed is determined as the "head lowering" movement.
Correspondingly, when the detection result indicates that the key point position change corresponds to an upward position shift of the user's head along the gravity direction, the short-time head action performed by the user for the time setting while the time setting interface is displayed is determined to be the "head-up" action.
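The detection in steps 3531 and 3533 can be sketched as comparing the mean vertical displacement of the keypoints between frames against the offset threshold. This sketch assumes standard image coordinates in which y grows downward (so "head down" gives a positive displacement); the function name and the use of a mean over keypoints are illustrative assumptions.

```python
def detect_head_action(prev_positions, curr_positions, offset_threshold):
    """Classify a short-time head action from keypoint positions in
    two image frames. Positions are (x, y) pairs with y growing
    downward; displacements within the offset threshold are treated
    as unrelated movement (capture failure)."""
    # Mean vertical keypoint displacement between the two frames.
    dy = sum(c[1] - p[1] for p, c in zip(prev_positions, curr_positions))
    dy /= len(curr_positions)
    if abs(dy) <= offset_threshold:
        return None                       # no time-setting action captured
    return "head_down" if dy > 0 else "head_up"
```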
Of course, in other embodiments, the action performed by the user for the time setting while the time setting interface is displayed may also be a short-time somatic action, such as a "jump up" action or a "squat down" action. In this case, the action capturing process substantially determines whether the change in the position of the key point corresponds to a position shift of the user's body in the direction of gravity, downward or upward.
Alternatively, the action performed by the user for the time setting while the time setting interface is displayed may also be a short-time facial expression action, such as a "close mouth" action or a "open mouth" action. At this time, the essence of the motion capture process is to determine whether the change in the position of the key point corresponds to the position shift of the user's mouth in the direction of gravity, either downward or upward.
Based on this, whether for short-time body actions or short-time facial expression actions, the principle of the capture process is similar to that of short-time head actions and is not repeated here; this embodiment imposes no specific limitation in this respect.
Under the effect of the embodiment, the head posture estimation processing of the image frame sequence is realized, so that the short-time head action executed by the user aiming at the time setting during the display of the time setting interface is captured and used as the basis of the time setting, the manual operation that the time setting depends on too much complexity is avoided, and the use experience of the user is effectively improved.
Fig. 13 is a schematic diagram of a specific implementation of a method for implementing time setting without touch in an application scenario. In the application scenario, the electronic device is a smart phone with a device for realizing time setting without touch, and the clock is a timer provided for the smart phone.
Along with the operation of the device for realizing time setting in a non-touch manner, a time setting interface corresponding to the timer is loaded and further displayed on a screen of the smart phone.
Therefore, the steps of capturing the action performed by the user for the time setting while the time setting interface is displayed and then performing the time setting are executed, i.e., step 810 and the subsequent steps, specifically as described in the method for realizing time setting in a non-touch manner in the above embodiments of the present invention.
With reference to fig. 14 to 15, the procedure of the non-contact time setting will be described by taking as an example that the user performs a short-time head action for time setting when the time setting interface is displayed.
The short-time head action comprises a head lowering action and a head raising action.
Assume that the initial time t0 is 30 min, the minimum time value t1 is 15 min, the maximum time value t2 is 45 min, and the interval (i.e., the increment or decrement time value) Δt is 15 min.
If the user performs a first "head up" action, the update time t = t' = t0 + Δt = 30 + 15 = 45 min; as shown in fig. 14(a) to 14(b), the time shown in the time setting interface changes from 30 min to 45 min, and the initial time is updated to t0 = 45 min.
If the user performs a second "head up" action, the intermediate time t' = t0 + Δt = 45 + 15 = 60 min, and since t' = 60 min > t2 = 45 min, after two successive head-up actions the update time t = t2 = 45 min; the time shown in the time setting interface remains 45 min, as shown in fig. 14(b), and the initial time is maintained at t0 = 45 min.
If the user performs a first "head down" action, the update time t = t' = t0 − Δt = 45 − 15 = 30 min; as shown in fig. 14(b) to 14(c), the time shown in the time setting interface changes from 45 min to 30 min, and the initial time is updated to t0 = 30 min.
If the user performs a second "head down" action, the update time t = t' = t0 − Δt = 30 − 15 = 15 min; as shown in fig. 14(c) to 14(d), the time shown in the time setting interface changes from 30 min to 15 min, and the initial time is updated to t0 = 15 min.
If the user performs a third "head down" action, the intermediate time t' = t0 − Δt = 15 − 15 = 0 min, and since t' = 0 min < t1 = 15 min, after three successive head-down actions the update time t = t1 = 15 min; the time shown in the time setting interface remains 15 min, as shown in fig. 14(d), and the initial time is maintained at t0 = 15 min.
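The fig. 14 walkthrough above can be replayed numerically. The sketch below uses the scenario's values (t0 = 30, t1 = 15, t2 = 45, Δt = 15); the `set_time` function name is an illustrative assumption.

```python
def set_time(t0, action, t1=15, t2=45, dt=15):
    """One time-setting step with the fig. 14 parameters:
    increment/decrement by Δt, clamped to [t1, t2]."""
    if action == "head_up":
        return min(t0 + dt, t2)
    if action == "head_down":
        return max(t0 - dt, t1)
    return t0  # no adjustment

t = 30
for a in ["head_up", "head_up", "head_down", "head_down", "head_down"]:
    t = set_time(t, a)
# displayed times: 45, 45 (clamped at t2), 30, 15, 15 (clamped at t1)
```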
When the user stops performing actions, i.e., the time setting is finished, the time setting interface jumps to the countdown interface, in which the countdown starting from the update time of 15 min is displayed, as shown in fig. 15; at this point the countdown function of the timer is started.
In this application scenario, non-touch time setting is realized: the user performs the time setting with the simple and familiar "head-down" and "head-up" actions in place of manual operations such as sliding a finger on the screen, which not only gives the timer new vitality but also gives the user a brand-new use experience, effectively solving the problem that time setting depends on rather complicated manual operations.
The following is an embodiment of the apparatus of the present invention, which can be used to perform the method for setting the non-touch realization time according to the present invention. For details that are not disclosed in the embodiments of the apparatus of the present invention, please refer to the embodiments of the method for implementing time setting without touch in the present invention.
Referring to FIG. 16, in an exemplary embodiment, an apparatus 900 for non-touch enabled time setting includes, but is not limited to: interface display module 910, image acquisition module 930, motion capture module 950, time setting module 960, and time display module 970.
The interface display module 910 is configured to display a time setting interface, where the time displayed in the time setting interface is initial time.
An image obtaining module 930, configured to obtain an image frame sequence including a plurality of image frames, where the plurality of image frames in the image frame sequence are generated by shooting a user when the time setting interface is displayed.
A motion capture module 950 for performing pose estimation processing on a plurality of image frames in the image frame sequence, and capturing a motion performed by a user for a time setting when the time setting interface is displayed.
A time setting module 960, configured to set the initial time displayed in the time setting interface according to the captured action, so as to obtain the updated time.
A time display module 970, configured to display that the time is changed from the initial time to the updated time in the time setting interface.
It should be noted that, when the apparatus for realizing time setting in a non-touch manner provided in the foregoing embodiment performs the time setting, the division into the above functional modules is merely an example; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above.
In addition, the apparatus for realizing time setting in a non-touch manner provided by the foregoing embodiments and the embodiments of the method for realizing time setting in a non-touch manner belong to the same concept; the specific manner in which each module performs operations has been described in detail in the method embodiments and is not repeated here.
Referring to fig. 17, in an exemplary embodiment, an electronic device 1000 includes at least one processor 1001, at least one memory 1002, and at least one communication bus 1003.
Wherein the memory 1002 has computer readable instructions stored thereon, the processor 1001 reads the computer readable instructions stored in the memory 1002 through the communication bus 1003.
The computer readable instructions, when executed by the processor 1001, implement the following steps, including but not limited to: displaying a time setting interface, wherein the time displayed in the time setting interface is initial time; acquiring an image frame sequence comprising a plurality of image frames, wherein the plurality of image frames in the image frame sequence are generated by shooting a user when the time setting interface is displayed; performing pose estimation processing on a plurality of image frames in the image frame sequence, and capturing an action performed by a user for time setting when the time setting interface is displayed; setting the initial time displayed in the time setting interface according to the captured action to obtain the updating time; and displaying the time in the time setting interface, wherein the time is changed from the initial time to the updating time.
In an exemplary implementation, the processor 1001 executes the computer readable instructions and is further configured to implement steps including, but not limited to: performing head pose estimation processing on each image frame aiming at a plurality of image frames in the image frame sequence, and determining the pose of the head of the user in each image frame; and capturing short-time head actions performed by the user for time setting when the time setting interface is displayed according to the head gestures of the user in the image frames.
In an exemplary implementation, when executing the computer readable instructions, the processor 1001 is further configured to implement steps including, but not limited to: identifying key points corresponding to the user's head in the image frame to obtain the key point positions of the user's head in the image frame; and determining the pose of the user's head in the image frame from those key point positions.
In an exemplary implementation, when executing the computer readable instructions, the processor 1001 is further configured to implement steps including, but not limited to: detecting, according to the key point positions of the user's head in each image frame, whether the user's head has shifted along the direction of gravity; and determining, according to the detection result, the short-time head movement performed by the user for time setting while the time setting interface is displayed.
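One plausible way to detect such a shift along the gravity direction (a sketch under assumptions, not the patent's disclosed algorithm; the choice of the nose tip as the tracked key point and the pixel threshold are both hypothetical) is to compare the vertical coordinate of a stable head key point across the frame sequence:

```python
def detect_vertical_shift(nose_y_per_frame: list, threshold: float = 10.0) -> str:
    """Classify a short head movement from per-frame nose-tip y-coordinates.

    Image y grows downward, so a nod (head shifting with gravity) increases y.
    Returns 'head_down', 'head_up', or 'none'.
    """
    if len(nose_y_per_frame) < 2:
        return "none"
    displacement = nose_y_per_frame[-1] - nose_y_per_frame[0]
    if displacement > threshold:
        return "head_down"   # head shifted along the gravity direction
    if displacement < -threshold:
        return "head_up"     # head shifted against the gravity direction
    return "none"            # displacement too small to count as a movement
```

The threshold would in practice be scaled to the detected face size so that the gesture works at different distances from the camera.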
In an exemplary implementation, when executing the computer readable instructions, the processor 1001 is further configured to implement steps including, but not limited to: adjusting the initial time according to the adjustment mode corresponding to the captured action to obtain the updated time.
In an exemplary implementation, when executing the computer readable instructions, the processor 1001 is further configured to implement steps including, but not limited to: determining an action state value corresponding to the captured action; and adjusting the initial time according to the adjustment mode indicated by the action state value to obtain the updated time.
In an exemplary implementation, when executing the computer readable instructions, the processor 1001 is further configured to implement steps including, but not limited to: when the adjustment mode indicated by the action state value is time increment, acquiring an increment time value; adding the increment time value to the initial time to obtain a first intermediate time; if the first intermediate time does not exceed a maximum time value, taking the first intermediate time as the updated time; otherwise, taking the maximum time value as the updated time.
In an exemplary implementation, when executing the computer readable instructions, the processor 1001 is further configured to implement steps including, but not limited to: when the adjustment mode indicated by the action state value is time decrement, acquiring a decrement time value; subtracting the decrement time value from the initial time to obtain a second intermediate time; if the second intermediate time is less than a minimum time value, taking the minimum time value as the updated time; otherwise, taking the second intermediate time as the updated time.
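The increment and decrement branches described above are each a saturating (clamped) adjustment. A minimal sketch, with the step size and the minimum/maximum bounds chosen arbitrarily for illustration:

```python
def adjust_time(initial: int, mode: str,
                step: int = 1, min_time: int = 0, max_time: int = 60) -> int:
    """Apply the time-increment / time-decrement adjustment modes with clamping."""
    if mode == "increment":
        intermediate = initial + step        # first intermediate time
        return min(intermediate, max_time)   # clamp to the maximum time value
    if mode == "decrement":
        intermediate = initial - step        # second intermediate time
        return max(intermediate, min_time)   # clamp to the minimum time value
    return initial                           # any other mode: no adjustment
```

Clamping keeps repeated head movements from driving the displayed time past its valid range, so the user can hold the gesture without overshooting.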
In an exemplary implementation, when executing the computer readable instructions, the processor 1001 is further configured to implement steps including, but not limited to: when the adjustment mode indicated by the action state value is no time adjustment, starting a time counter; and when the value of the time counter exceeds a set threshold, taking the initial time as the updated time.
In an exemplary implementation, when executing the computer readable instructions, the processor 1001 is further configured to implement steps including, but not limited to: when the value of the time counter exceeds the set threshold, jumping from the time setting interface to a countdown interface, and displaying in the countdown interface a countdown starting from the updated time.
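The no-adjustment branch behaves like an inactivity timeout that confirms the current value. A hypothetical frame-count variant of that time counter (the threshold, the action labels, and the class itself are assumptions for illustration):

```python
class IdleConfirmer:
    """Counts consecutive 'no adjustment' frames; once the count exceeds the
    threshold, the current value is confirmed as the updated time (at which
    point the UI would jump from the time setting interface to the countdown)."""

    def __init__(self, threshold_frames: int) -> None:
        self.threshold = threshold_frames
        self.counter = 0

    def update(self, action: str, current_value: int):
        if action != "none":          # any adjustment action resets the timer
            self.counter = 0
            return None
        self.counter += 1             # time counter ticks while the user is idle
        if self.counter > self.threshold:
            return current_value      # confirmed: taken as the updated time
        return None                   # still waiting
```

Resetting the counter on every captured movement ensures the countdown only starts once the user has deliberately stopped adjusting.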
In an exemplary implementation, when executing the computer readable instructions, the processor 1001 is further configured to implement steps including, but not limited to: taking the updated time as the new initial time, and returning to the step of acquiring an image frame sequence comprising a plurality of image frames, so as to implement a cyclic setting process of the time displayed in the time setting interface.
In an exemplary embodiment, a storage medium has a computer program stored thereon; when executed by a processor, the computer program implements the method of non-touch time setting described in the above embodiments.
The above-mentioned embodiments are merely preferred examples of the present invention and are not intended to limit its implementations. Those skilled in the art can readily make corresponding variations and modifications according to the main concept and spirit of the present invention; therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (15)

1. A method for non-touch enabled time setting, comprising:
displaying a time setting interface, wherein the time displayed in the time setting interface is initial time;
acquiring an image frame sequence comprising a plurality of image frames, wherein the plurality of image frames in the image frame sequence are generated by shooting a user when the time setting interface is displayed;
performing pose estimation processing on a plurality of image frames in the image frame sequence, and capturing an action performed by a user for time setting when the time setting interface is displayed;
setting the initial time displayed in the time setting interface according to the captured action to obtain an updated time;
and displaying the time in the time setting interface as changed from the initial time to the updated time.
2. The method of claim 1, wherein the performing a pose estimation process on a plurality of image frames in the sequence of image frames, capturing an action performed by a user for a time setting while the time setting interface is displayed, comprises:
performing head pose estimation processing on each of the plurality of image frames in the image frame sequence to determine the pose of the user's head in each image frame;
and capturing, according to the poses of the user's head in the image frames, short-time head movements performed by the user for time setting while the time setting interface is displayed.
3. The method of claim 2, wherein said performing a head pose estimation process on each image frame to determine the pose of the user's head in each image frame comprises:
identifying key points corresponding to the head of the user in the image frame to obtain the positions of the key points of the head of the user in the image frame;
determining a pose of the user head in the image frame from the keypoint locations of the user head in the image frame.
4. The method of claim 3, wherein said capturing, according to the poses of the user's head in each image frame, short-time head movements performed by the user for time setting while the time setting interface is displayed comprises:
detecting, according to the key point positions of the user's head in each image frame, whether the user's head has shifted along the direction of gravity;
and determining, according to the detection result, the short-time head movement performed by the user for time setting while the time setting interface is displayed.
5. The method of any of claims 1 to 4, wherein setting the initial time presented in the time setting interface based on the captured action, resulting in an updated time, comprises:
adjusting the initial time according to the adjustment mode corresponding to the captured action to obtain the updated time.
6. The method of claim 5, wherein said adjusting the initial time according to the adjustment mode corresponding to the captured action to obtain the updated time comprises:
determining an action state value corresponding to the captured action;
and adjusting the initial time according to the adjustment mode indicated by the action state value to obtain the updated time.
7. The method as claimed in claim 6, wherein said adjusting the initial time according to the adjustment mode indicated by the action state value to obtain the updated time comprises:
when the adjustment mode indicated by the action state value is time increment, acquiring an increment time value;
adding the increment time value to the initial time to obtain a first intermediate time;
if the first intermediate time does not exceed a maximum time value, taking the first intermediate time as the updated time;
otherwise, taking the maximum time value as the updated time.
8. The method as claimed in claim 6, wherein said adjusting the initial time according to the adjustment mode indicated by the action state value to obtain the updated time comprises:
when the adjustment mode indicated by the action state value is time decrement, acquiring a decrement time value;
subtracting the decrement time value from the initial time to obtain a second intermediate time;
if the second intermediate time is less than a minimum time value, taking the minimum time value as the updated time;
otherwise, taking the second intermediate time as the updated time.
9. The method as claimed in claim 6, wherein said adjusting the initial time according to the adjustment mode indicated by the action state value to obtain the updated time comprises:
when the adjustment mode indicated by the action state value is no time adjustment, starting a time counter;
and when the value of the time counter exceeds a set threshold, taking the initial time as the updated time.
10. The method as claimed in claim 9, wherein said adjusting the initial time according to the adjustment mode indicated by the action state value to obtain the updated time further comprises:
when the value of the time counter exceeds the set threshold, jumping from the time setting interface to a countdown interface, and displaying in the countdown interface a countdown starting from the updated time.
11. The method of any of claims 1-4, wherein after the time displayed in the time setting interface changes from the initial time to the updated time, the method further comprises:
taking the updated time as the new initial time, and returning to the step of acquiring an image frame sequence comprising a plurality of image frames, so as to implement a cyclic setting process of the time displayed in the time setting interface.
12. An apparatus for non-touch enabled time setting, comprising:
the interface display module is used for displaying a time setting interface, and the time displayed in the time setting interface is initial time;
the image acquisition module is used for acquiring an image frame sequence containing a plurality of image frames, and the plurality of image frames in the image frame sequence are generated by shooting a user when the time setting interface is displayed;
the motion capture module is used for performing pose estimation processing on the plurality of image frames in the image frame sequence and capturing the action performed by the user for time setting while the time setting interface is displayed;
the time setting module is used for setting the initial time displayed in the time setting interface according to the captured action to obtain the updated time;
and the time display module is used for displaying, in the time setting interface, the time changing from the initial time to the updated time.
13. The apparatus of claim 12, wherein the motion capture module comprises:
the pose estimation unit is used for performing head pose estimation processing on each of the plurality of image frames in the image frame sequence and determining the pose of the user's head in each image frame;
and the head movement capturing unit is used for capturing, according to the poses of the user's head in the image frames, short-time head movements performed by the user for time setting while the time setting interface is displayed.
14. An electronic device, comprising:
a processor; and
a memory having computer readable instructions stored thereon which, when executed by the processor, implement a method of non-touch enabled time setting as claimed in any of claims 1 to 11.
15. A storage medium having stored thereon a computer program, characterized in that the computer program, when being executed by a processor, implements a method of non-touch-enabled time setting as claimed in any one of claims 1 to 11.
CN201911368129.7A 2019-12-26 2019-12-26 Method and device for realizing time setting in non-touch mode, electronic equipment and storage medium Pending CN111176448A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911368129.7A CN111176448A (en) 2019-12-26 2019-12-26 Method and device for realizing time setting in non-touch mode, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111176448A true CN111176448A (en) 2020-05-19

Family

ID=70655721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911368129.7A Pending CN111176448A (en) 2019-12-26 2019-12-26 Method and device for realizing time setting in non-touch mode, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111176448A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104793481A (en) * 2014-01-22 2015-07-22 巨擘科技股份有限公司 Time adjusting method and system of wristwatch
AU2015101020A4 (en) * 2014-08-02 2015-09-10 Apple Inc. Context-specific user interfaces
CN105549882A (en) * 2015-12-09 2016-05-04 魅族科技(中国)有限公司 Time setting method and mobile terminal
CN107239728A (en) * 2017-01-04 2017-10-10 北京深鉴智能科技有限公司 Unmanned plane interactive device and method based on deep learning Attitude estimation
CN107703728A (en) * 2017-10-18 2018-02-16 广东乐芯智能科技有限公司 A kind of watch hand to predeterminated position method
CN108197534A (en) * 2017-12-19 2018-06-22 迈巨(深圳)科技有限公司 A kind of head part's attitude detecting method, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
US20200026920A1 (en) Information processing apparatus, information processing method, eyewear terminal, and authentication system
CN105574484B (en) Electronic device and method for analyzing face information in electronic device
CN108153424B (en) Eye movement and head movement interaction method of head display equipment
EP2634727B1 (en) Method and portable terminal for correcting gaze direction of user in image
WO2021179773A1 (en) Image processing method and device
US20160239081A1 (en) Information processing device, information processing method, and program
US10602076B2 (en) Method for combining and providing image, obtained through a camera, electronic device, and storage medium
CN112118380B (en) Camera control method, device, equipment and storage medium
CN109375765B (en) Eyeball tracking interaction method and device
US20200103967A1 (en) Pupil Modulation As A Cognitive Control Signal
EP2981935A1 (en) An apparatus and associated methods
KR20160092256A (en) Image processing method and electronic device supporting the same
US9965029B2 (en) Information processing apparatus, information processing method, and program
CN109600555A (en) A kind of focusing control method, system and photographing device
CN111045577A (en) Horizontal and vertical screen switching method, wearable device and device with storage function
CN112114653A (en) Terminal device control method, device, equipment and storage medium
CN104035544A (en) Method for controlling electronic device and electronic device
CN109246292A (en) A kind of moving method and device of terminal desktop icon
CN112099639A (en) Display attribute adjusting method and device, display equipment and storage medium
CN111176448A (en) Method and device for realizing time setting in non-touch mode, electronic equipment and storage medium
CN106371552B (en) Control method and device for media display at mobile terminal
US8736705B2 (en) Electronic device and method for selecting menus
CN110018733A (en) Determine that user triggers method, equipment and the memory devices being intended to
CN111506192A (en) Display control method and device, mobile terminal and storage medium
CN104156138A (en) Shooting controlling method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200519