CN112306602A - Timing method, timing device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112306602A
CN112306602A
Authority
CN
China
Prior art keywords
timing
image
job
task
interface
Prior art date
Legal status
Pending
Application number
CN202011164423.9A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202011164423.9A
Publication of CN112306602A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/30 Creation or generation of source code
    • G06F 8/38 Creation or generation of source code for implementing user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V 10/235 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on user input or interaction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition

Abstract

The embodiments of the present disclosure disclose a timing method, a timing device, an electronic device and a storage medium. The method includes: in response to a trigger operation on a start control in a first interface, capturing a job image and performing job-task recognition on the job image; and timing based on the recognized job task, and displaying the timing result on a second interface. Because job images are recognized and timing is performed according to the recognized job tasks, each job task can be timed individually, which helps improve learning efficiency and user experience.

Description

Timing method, timing device, electronic equipment and storage medium
Technical Field
The embodiments of the present disclosure relate to the field of computer technology, and in particular to a timing method, a timing device, an electronic device and a storage medium.
Background
In common application (APP) products on the market, a student's homework tasks are generally timed as follows: the application starts and stops timing in response to the user triggering a start control and a stop control, respectively, in the APP interface.
This approach has at least the following disadvantage: during a homework session, a user may need to complete multiple job tasks, so the start and stop controls must be triggered repeatedly to time each task. The learning process is therefore constantly interrupted, which reduces learning efficiency and degrades the user experience.
Disclosure of Invention
The embodiments of the present disclosure provide a timing method, a timing device, an electronic device and a storage medium, which can automatically time each job task and help improve learning efficiency and user experience.
In a first aspect, an embodiment of the present disclosure provides a timing method, including:
in response to a trigger operation on a start control in a first interface, capturing a job image and performing job-task recognition on the job image;
and timing based on the recognized job task, and displaying the timing result on a second interface.
In a second aspect, an embodiment of the present disclosure further provides a timing device, including:
a recognition module for capturing a job image in response to a trigger operation on a start control in a first interface, and performing job-task recognition on the job image;
and a timing module for timing based on the recognized job task, and displaying the timing result on a second interface.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the timing method according to any embodiment of the present disclosure.
In a fourth aspect, the disclosed embodiments also provide a storage medium containing computer-executable instructions for performing the timing method according to any one of the disclosed embodiments when executed by a computer processor.
According to the technical solution of the embodiments of the present disclosure, a job image is captured in response to a trigger operation on a start control in a first interface, and job-task recognition is performed on the job image; timing is then performed based on the recognized job task, and the timing result is displayed on a second interface. Because job images are recognized and timing is performed according to the recognized job tasks, each job task can be timed individually, which helps improve learning efficiency and user experience.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a timing method according to an embodiment of the disclosure;
fig. 2 is a schematic diagram of a first interface in a timing method according to a first embodiment of the disclosure;
fig. 3 is a schematic diagram of a second interface in a timing method according to an embodiment of the disclosure;
fig. 4 is a schematic diagram of a third interface in a timing method according to an embodiment of the disclosure;
fig. 5 is a schematic flowchart of a timing method according to a second embodiment of the disclosure;
fig. 6 is a schematic structural diagram of a timing device according to a third embodiment of the disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to a fourth embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an" and "the" in the present disclosure are illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
Example one
Fig. 1 is a schematic flowchart of a timing method provided by an embodiment of the present disclosure. The embodiment is applicable to situations where a client application is used to time individual job tasks. The method may be executed by a timing device, which may be implemented in software and/or hardware, may be integrated in the client application, and may be installed together with the application in an electronic device such as a mobile phone, tablet computer, notebook computer or desktop computer.
As shown in fig. 1, the timing method provided in this embodiment includes:
and S110, responding to the trigger operation of the starting control in the first interface, collecting a job image, and identifying a job task for the job image.
In the embodiments of the present disclosure, the timing method may be applied to a client application (APP), such as a learning application (e.g., a homework application). When the application is installed in an electronic device such as a smartphone or computer, the electronic device may start the application in response to a user start operation, for example a click on the application icon.
After the application is started, the first interface can be used as a home page interface of the application, a sub-level interface of the home page interface and the like, and a start control can be displayed in the first interface. When the user is ready to start the writing operation, the start control can be triggered, so that the application can automatically time for each operation task in the writing operation process of the user.
In this embodiment, the application may establish a communication connection with an image capture device (for example, a camera) in advance; when the application detects a trigger operation on the start control in the first interface, it may start the image capture device and use it to capture images. So that the captured job images present the job content clearly, the image capture device may be an overhead device deployed above the desktop, for example the overhead camera of a smart desk lamp.
The captured job image may be the raw image of the capture area of the image capture device, or an image obtained by preprocessing that area image. The job images used for recognition may be those captured in real time, or a preset number of job images screened from the real-time stream. Recognizing images captured in real time allows the user's current job task to be determined in real time and improves the timing accuracy for each task; recognizing only the screened images reduces the processing resources the application consumes and improves its fluency, while still determining the real-time job task to a reasonable degree.
The job task may be the name of the book to which the job belongs, such as an arithmetic practice book, a dictation exercise book, or a winter-vacation exercise book.
Job-task recognition may be performed directly on the job image. For example, a captured job image containing a book cover may be matched against the cover images in a cover-image library, and the job task corresponding to the matched cover image is taken as the job task of the captured image.
Alternatively, performing job-task recognition on the job image may include: performing character recognition on the job image, and querying a preset database for the job task that matches the recognized text.
Character recognition may use Optical Character Recognition (OCR) and/or OpenCV-based text detection and recognition. All text recognized from the whole job image can be matched against the text data of each book stored in a preset database, and the job task corresponding to the recognized text is determined from the match degree. The books stored in the preset database can be updated in real time. Determining the job task from the match degree may, for example, involve checking whether the maximum match degree exceeds a preset value (e.g., 95%); if so, the book name with the maximum match degree is taken as the job task corresponding to the recognized text.
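The database-matching step can be sketched as follows. This is a minimal illustration only: the in-memory `BOOK_DATABASE` dict and the use of `difflib` as the similarity measure are assumptions for the sketch, since the disclosure does not fix a concrete database format or match-degree metric; the 0.95 threshold mirrors the 95% example above.

```python
import difflib

# Hypothetical stand-in for the preset database: book name -> stored text data.
BOOK_DATABASE = {
    "arithmetic practice book": "23 + 45 = ? 67 - 18 = ? multiplication table",
    "dictation exercise book": "listen and write the words spring river flower moon",
}

def match_job_task(recognized_text, threshold=0.95):
    """Return the book name whose stored text best matches the OCR output,
    or None if the maximum match degree does not exceed the threshold."""
    best_name, best_ratio = None, 0.0
    for name, text in BOOK_DATABASE.items():
        ratio = difflib.SequenceMatcher(None, recognized_text, text).ratio()
        if ratio > best_ratio:
            best_name, best_ratio = name, ratio
    return best_name if best_ratio > threshold else None
```

A real implementation would query an updatable server-side database and likely use a recall-then-rank text matcher rather than pairwise sequence similarity, but the threshold-on-maximum-match-degree logic is the same.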
S120: time based on the recognized job task, and display the timing result on a second interface.
In the embodiments of the present disclosure, the job image used for recognition can be regarded as one that satisfies the requirement of determining the job task in real time. By determining in real time which job task the user is currently working on, the recognized task's timing can be updated, so that every job task is timed. Displaying the timing result (i.e., the currently updated time) on the second interface helps the user intuitively understand how much time the current job task has actually consumed, improving the user experience.
In some optional embodiments, timing based on the recognized job task includes: judging whether the currently determined job task is the same as the previously determined one; if so, continuing to accumulate the timing of the currently determined task; if not, starting a new timing for the currently determined task and taking the accumulated result of the original timing as the actual elapsed time of the previously determined task.
The per-task timing can be regarded as a loop of timing accumulation: each time a new job task is determined, a new timing is started and increases with standard time (i.e., timing accumulation) until another new job task is determined; the original timing then stops, its accumulated result becomes the actual elapsed time of the original task, and a fresh timing is started and accumulated for the new task.
In these optional implementations, the current job task's timing is accumulated when the currently determined task is judged to be the same as the previous one, and a new timing accumulation is started when it differs, so that job tasks are timed automatically.
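The timing loop above can be sketched as a small state machine. This is an illustrative sketch under stated assumptions, not the patented implementation: the class name, the injectable clock, and the dict of elapsed times are all invented here for clarity.

```python
import time

class JobTaskTimer:
    """Accumulate elapsed time per job task: same task keeps accumulating;
    a new task finalizes the old timing and starts a fresh one."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock      # injectable clock, so the sketch is testable
        self._current = None     # name of the task currently being timed
        self._started = None     # clock value when the current timing began
        self.elapsed = {}        # task name -> accumulated actual elapsed time

    def observe(self, task):
        """Called each time a job task is recognized from a job image."""
        if task == self._current:
            return               # same task: the running timing keeps accumulating
        now = self._clock()
        if self._current is not None:
            # finalize the previous task's timing as its actual elapsed time
            prev = self.elapsed.get(self._current, 0)
            self.elapsed[self._current] = prev + (now - self._started)
        self._current, self._started = task, now

    def finish(self):
        """Triggered by the completion control: stop the timing loop."""
        self.observe(None)
        return dict(self.elapsed)
```

Note that `elapsed.get(..., 0)` lets a task that reappears later (math, then Chinese, then math again) keep accumulating across its separate timing runs.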
In some optional embodiments, the timing method may further include: in response to a trigger operation on a completion control in the second interface, determining the actual elapsed time of each job task, and displaying the determined times on a third interface.
When the last job task is completed, the user can trigger the completion control in the second interface, causing the application to stop accumulating the timing of the last task and to exit the timing loop. In response to the trigger operation on the completion control, the application can also tally the actual elapsed time of each job task and display it on the third interface, helping the user understand how much time each task consumed, improve their study methods in a more targeted way, and raise learning efficiency.
Further, when job-task recognition is performed on a job image, subject recognition can also be performed. For example, the text recognized from the job image may be input into a pre-trained model that outputs the corresponding subject; the pre-trained model may be a classification model such as a neural network. As another example, the recognized text may be matched against the text data of the books stored in a preset database, and the subject determined from the matched book.
Furthermore, after determining the actual elapsed time of each job task, the application may aggregate the actual elapsed time per subject and display the per-subject totals on the third interface. Showing the actual time spent on each subject helps the user recognize subject imbalances and study the weaker subjects in a more targeted way.
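The per-subject aggregation is a straightforward reduction over the per-task times. In this sketch, both mappings are hypothetical examples, since the disclosure leaves the exact data model open:

```python
def summarize_by_subject(task_elapsed, task_subject):
    """Sum the actual elapsed time of each job task into per-subject totals.

    task_elapsed: task name -> actual elapsed seconds
    task_subject: task name -> recognized subject
    """
    totals = {}
    for task, seconds in task_elapsed.items():
        subject = task_subject.get(task, "unknown")
        totals[subject] = totals.get(subject, 0) + seconds
    return totals
```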
Fig. 2 is a schematic diagram of a first interface in a timing method according to a first embodiment of the disclosure.
Referring to fig. 2, the application to which the first interface belongs may be one that assists the user in doing homework. The first interface may show a start control 10, which may be a button or another type of control; prompt text such as "start doing homework" may be attached to it to tell the user that, once ready to start learning, the start control can be triggered. The application responds to the trigger operation on the start control 10 in the first interface by capturing a job image and performing job-task recognition on it. The first interface may also present a schematic of a camera capturing the desktop, indicating that the user should place the learning materials within the camera's capture range so that the job image can be recognized smoothly.
For example, fig. 3 is a schematic diagram of a second interface in a timing method according to a first embodiment of the disclosure.
Referring to fig. 3, the second interface may provide a timing control 201 used to show the actual elapsed time of the currently recognized job task; timing restarts when the application recognizes a new job task. The second interface may also display at least one of the following controls: a job-task prompt control 202, a planned-time prompt control 203, a completion control 204 and a presentation control 205.
The job-task prompt control 202 may show the currently recognized job task, such as "language exercise book". The planned-time prompt control 203 may display the planned time the user has edited; when no planned time has been edited, the application can respond to a trigger on the planned-time prompt control 203 by jumping to an editing overlay in which the planned time can be set. The completion control 204, when triggered, determines the actual elapsed time of each job task and displays the determined times on the third interface. The presentation control 205 can be used to display the captured job image.
In addition, if job-task recognition takes a long time, the first interface may be slow to jump, or fail to jump, to the second interface, and the user cannot tell whether recognition is in progress, which degrades the user experience to some extent. Therefore, while job-task recognition is being performed on the job image, the method may further include: showing the job image on a fourth interface and rendering a recognition animation (such as a vertical scan-line animation) over it. This prompts the user that the application is performing job-task recognition and improves the interaction experience. Once the learning content is recognized successfully, the application jumps from the fourth interface to the second interface to display the job-task timing.
For example, fig. 4 is a schematic diagram of a third interface in a timing method according to a first embodiment of the disclosure.
Referring to fig. 4, the third interface may include, but is not limited to: a presentation area 401 containing time-consumption information, which may show at least one of the actual elapsed time of each job task, the total time per subject, and the total time of the learning session, and which may also offer study suggestions based on per-subject time and grant rewards (such as energy points or little-red-flower awards) based on the total time; a user feedback control 402 for receiving the user's state feedback after completing the session, for example but not limited to "energized", "happy" and "keep it up"; and an exit control 403 that exits the application when triggered.
According to the technical solution of this embodiment, a job image is captured in response to a trigger operation on a start control in a first interface, and job-task recognition is performed on it; timing is performed based on the recognized job task, and the timing result is displayed on a second interface. Because job images are recognized and timing is performed according to the recognized job tasks, each job task can be timed individually, which helps improve learning efficiency and user experience.
Example two
The present embodiment may be combined with the optional solutions in the timing method provided by the embodiment above. The timing method provided here refines the job-image capture step: images can be captured and preprocessed according to preset rules to obtain job images, which helps improve the real-time recognition effect and the recognition accuracy of job tasks.
Fig. 5 is a flowchart illustrating a timing method according to a second embodiment of the disclosure. Referring to fig. 5, the timing method provided in this embodiment includes:
and S510, responding to the trigger operation of the starting control in the first interface, and acquiring the first area image according to a preset rule.
Wherein, the first area image can be regarded as an image of an acquisition area of the image acquisition device.
Capturing the first-area image according to a preset rule may mean, for example, capturing an image at preset intervals (e.g., every 1 s or 3 s), which reduces the resources consumed by recognition while still satisfying the real-time recognition task. It may also mean evaluating the image quality of the first-area image at preset intervals and only using a capture for job-task recognition when its quality reaches the standard, so that recognition works from high-quality job images (good sharpness, proper exposure) and accuracy improves. Other preset rules may also apply and are not enumerated exhaustively here.
In some optional implementations, capturing the first-area image includes: capturing the first-area image at preset intervals; and/or capturing the first-area image whenever an edge of a target object is detected entering the first area.
The target object may be, for example, an object such as a book. Its edges may be detected with an edge-detection filter (e.g., the Sobel, Prewitt or Roberts operator). When the length of edge inside the first area increases, or the area enclosed by the edges increases, the target object can be considered to have entered the first area.
In these optional embodiments, besides capturing the first-area image at preset intervals, an image may be captured whenever the edge of a target object is detected entering the capture area. The application can therefore capture a first-area image when the student switches homework before the preset interval has elapsed, further improving timing accuracy.
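The two capture rules can be combined in a small trigger, sketched below. This is an assumed decomposition for illustration: the edge measurement itself (e.g., a Sobel pass over the camera frame) is taken to happen upstream and is summarized here as a single `edge_amount` number.

```python
class CaptureTrigger:
    """Decide when to grab a first-area image: every `interval` seconds,
    or whenever the amount of detected edge in the area grows (interpreted
    as a target object entering the area)."""

    def __init__(self, interval=3.0):
        self.interval = interval
        self._last_capture = None
        self._last_edge_amount = 0

    def should_capture(self, now, edge_amount):
        trigger = False
        # periodic rule: first call, or preset interval elapsed
        if self._last_capture is None or now - self._last_capture >= self.interval:
            trigger = True
        # edge-entry rule: more edge than last frame means an object entered
        if edge_amount > self._last_edge_amount:
            trigger = True
        self._last_edge_amount = edge_amount
        if trigger:
            self._last_capture = now
        return trigger
```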
S520: preprocess the first-area image to obtain a job image.
After the first-area image is captured, it can be preprocessed to correct and/or highlight the job portion of the image, facilitating job-task recognition.
In some optional implementations, preprocessing the first-area image includes: determining target corner points in the first-area image and cropping the image according to the target corner points; and/or correcting the first-area image; and/or performing image enhancement on the first-area image.
The target corner points may be determined by corner detection based on grayscale features (e.g., gradient features) of the first-area image, on its binarized features, or on its contour curves. The target corners are then chosen from the detection result: every detected corner may be used, or a subset may be selected.
Selecting a subset of corners may, for example, involve computing the match degree between each detected corner and the corners of a preset corner model, and keeping the corners with high match degree as target corners. The preset corner model may be, for example, an open-book page model containing six corner points, or some other model.
Determining the target corner points in the first-area image locates the corners of the job portion; cropping the image according to the target corner points separates out the job image of each page, so that text recognition runs on each page and the job task is queried from the preset database using the text of a whole page, improving the recognition accuracy of the job task.
Correcting the first-area image includes, but is not limited to, rectangular (perspective) correction to remove distortion and/or rotation correction to adjust the image angle. Correcting the image improves the accuracy of character recognition.
Enhancing the first-area image includes, but is not limited to, image sharpening by a gradient method, high-pass filtering or mask matching. Image enhancement makes the character features in the image more prominent and raises the success rate of character recognition.
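One simple instance of the sharpening step is a 3x3 Laplacian-style convolution, sketched below on a grayscale image given as a list of rows; the kernel choice and the decision to leave border pixels unchanged are assumptions made for brevity, and gradient or high-pass methods would follow the same convolution pattern.

```python
def sharpen(image):
    """Sharpen a grayscale image (list of rows of pixel values) with a
    3x3 Laplacian-style kernel; border pixels are left unchanged."""
    kernel = [[ 0, -1,  0],
              [-1,  5, -1],
              [ 0, -1,  0]]
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            acc = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    acc += kernel[dr + 1][dc + 1] * image[r + dr][c + dc]
            out[r][c] = acc
    return out
```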
In some preferred implementations, the first-area image may first be cropped according to the detected target corner points, and the cropped image then corrected and enhanced to obtain the job image, which can greatly improve the success rate and accuracy of character recognition.
S530: perform job-task recognition on the job image.
S540: time based on the recognized job task, and display the timing result on a second interface.
According to the technical solution of this embodiment, the job-image capture step is refined: images can be captured according to preset rules and preprocessed to obtain job images, which helps improve the real-time recognition effect and recognition accuracy of job tasks. The timing method provided by this embodiment shares the same technical concept as the one provided by the embodiment above; technical details not described here can be found in that embodiment, and identical technical features have the same beneficial effects in both.
EXAMPLE III
Fig. 6 is a schematic structural diagram of a timing device according to a third embodiment of the present disclosure. The timing device provided by this embodiment is suitable for timing each job task with a client application. As shown in fig. 6, the timing device includes:
a recognition module 610 configured to capture a job image in response to a trigger operation on the start control in the first interface, and perform job-task recognition on the job image;
and a timing module 620 configured to time based on the recognized job task, and display the timing result on the second interface.
In some optional implementations, the identification module includes:
the image determining submodule is used for acquiring the first area image according to a preset rule, and preprocessing the first area image to obtain the job image.
In some optional implementations, the image determination sub-module includes:
the image acquisition unit is used for acquiring the first area image at preset time intervals; and/or acquiring the first area image whenever an edge of the target object is detected entering the first area.
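The two acquisition triggers of the image acquisition unit (a preset interval and detection of the target object's edge entering the first area) can be combined in a single predicate. The sketch below is illustrative only; its names and signature are assumptions, not part of the disclosure.

```python
def should_capture(now, last_capture_time, interval, edge_in_region):
    """Return True when a new first-area image should be grabbed:
    either the preset interval has elapsed since the last capture,
    or an edge of the target object has entered the first area."""
    return edge_in_region or (now - last_capture_time) >= interval
```

In a real client, this predicate would be evaluated on every camera frame, with `edge_in_region` supplied by an edge detector running on the first area.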
In some optional implementations, the image determination sub-module includes:
the image processing unit is used for determining target corner points in the first area image and cropping the first area image according to the target corner points; and/or rectifying the first area image; and/or performing image enhancement on the first area image.
In some optional implementations, the identification module includes:
the recognition submodule is used for performing character recognition on the job image and querying a preset database for the job task matching the recognized characters.
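As an illustration of querying a preset database for the job task matching the recognized characters, the sketch below matches OCR text against per-task keywords. The database shape, names, and matching rule are assumptions for the example only, not the disclosed implementation.

```python
def match_job_task(recognized_text, task_database):
    """Look up the job task whose keywords occur in the OCR text.
    `task_database` maps task name -> list of keywords; returns the
    first matching task name, or None if nothing matches."""
    text = recognized_text.lower()
    for task, keywords in task_database.items():
        if any(keyword.lower() in text for keyword in keywords):
            return task
    return None
```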
In some optional implementations, the timing module is specifically configured to determine whether the currently determined job task is the same as the previously determined job task; if so, to continue accumulating the timing of the currently determined job task; and if not, to start a new timing for the currently determined job task and take the accumulated result of the original timing as the actual elapsed time of the previously determined job task.
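The accumulate-or-restart timing logic described above can be sketched as a small state machine; timestamps are passed in explicitly so the behaviour is deterministic. This is an illustrative sketch under those assumptions, not the disclosed implementation, and all names are hypothetical.

```python
class TaskTimer:
    """Accumulate elapsed time per job task: if the newly identified
    task equals the current one, the running timer keeps accumulating;
    otherwise the old task's total is banked and a new timer starts."""

    def __init__(self):
        self.current_task = None
        self.started_at = None
        self.elapsed = {}  # task -> actual elapsed time so far

    def on_task_identified(self, task, now):
        if task == self.current_task:
            return  # same task: keep accumulating on the running timer
        if self.current_task is not None:
            # Different task: bank the previous task's accumulated time.
            self.elapsed[self.current_task] = (
                self.elapsed.get(self.current_task, 0)
                + now - self.started_at)
        self.current_task = task
        self.started_at = now

    def finish(self, now):
        """Completion triggered: close out the running timer and
        return the actual elapsed time of each job task."""
        self.on_task_identified(None, now)
        return dict(self.elapsed)
```

When the completion control in the second interface is triggered, `finish` closes out the running timer and yields the per-task totals that the third interface would display.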
In some optional implementations, the timing device further includes:
the elapsed-time display module is used for, in response to a trigger operation on a completion control in the second interface, determining the actual elapsed time of each job task and displaying the determined actual elapsed time on a third interface.
The timing device provided by the embodiment of the disclosure can execute the timing method provided by any embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the execution method.
It should be noted that the units and modules included in the above device are divided only according to functional logic and are not limited to this division, as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for distinguishing them from one another and do not limit the protection scope of the embodiments of the present disclosure.
Example Four
Referring now to fig. 7, there is shown a schematic structural diagram of an electronic device 100 (e.g., a terminal device or a server) suitable for implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), and a vehicle-mounted terminal (e.g., a car navigation terminal), and stationary terminals such as a digital TV and a desktop computer. The electronic device shown in fig. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 100 may include a processing means 101 (e.g., a central processing unit, a graphics processor, etc.), which may perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 102 or a program loaded from a storage means 108 into a Random Access Memory (RAM) 103. The RAM 103 also stores various programs and data necessary for the operation of the electronic device 100. The processing means 101, the ROM 102, and the RAM 103 are connected to one another via a bus 104. An input/output (I/O) interface 105 is also connected to the bus 104.
Generally, the following devices may be connected to the I/O interface 105: input devices 106 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 107 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 108 including, for example, magnetic tape, hard disk, etc.; and a communication device 109. The communication means 109 may allow the electronic device 100 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 100 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 109, installed from the storage means 108, or installed from the ROM 102. When executed by the processing means 101, the computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure.
The electronic device provided by this embodiment of the present disclosure and the timing method provided by the above embodiments belong to the same disclosed concept; for technical details not described in this embodiment, reference may be made to the above embodiments, and this embodiment has the same beneficial effects as the above embodiments.
Example Five
The disclosed embodiments provide a computer storage medium having stored thereon a computer program that, when executed by a processor, implements the timing method provided by the above embodiments.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM) or FLASH Memory (FLASH), an optical fiber, a portable compact disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
in response to a trigger operation on a start control in a first interface, acquire a job image, and perform job task identification on the job image; and perform timing based on the identified job task, and display a timing result on a second interface.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The names of the units and modules do not limit the units and modules in some cases, and for example, "identification module" may also be described as "job task identification module".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Part (ASSP), a System On Chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [ example one ] there is provided a timing method, the method comprising:
in response to a trigger operation on a start control in a first interface, collecting a job image, and performing job task identification on the job image;
performing timing based on the identified job task, and displaying a timing result on a second interface.
According to one or more embodiments of the present disclosure, [ example two ] there is provided a timing method, further comprising:
in some optional implementations, the capturing a job image includes:
acquiring a first area image according to a preset rule;
preprocessing the first area image to obtain the job image.
According to one or more embodiments of the present disclosure, [ example three ] there is provided a timing method, further comprising:
in some optional implementations, the acquiring, according to a preset rule, the first region image includes:
acquiring the first area image at preset time intervals; and/or,
acquiring the first area image when an edge of the target object is detected entering the first area.
According to one or more embodiments of the present disclosure, [ example four ] there is provided a timing method, further comprising:
in some optional implementations, the pre-processing the first region image includes:
determining a target corner point in the first area image, and cropping the first area image according to the target corner point; and/or,
rectifying the first area image; and/or,
performing image enhancement on the first area image.
According to one or more embodiments of the present disclosure, [ example five ] there is provided a timing method further comprising:
in some optional implementations, the performing job task identification on the job image includes:
performing character recognition on the job image, and querying a preset database for the job task matching the recognized characters.
According to one or more embodiments of the present disclosure, [ example six ] there is provided a timing method further comprising:
in some optional implementations, the timing based on the identified job task includes:
determining whether the currently determined job task is the same as the previously determined job task;
if so, continuing to accumulate the timing of the currently determined job task;
if not, starting a new timing for the currently determined job task, and taking the accumulated result of the original timing as the actual elapsed time of the previously determined job task.
According to one or more embodiments of the present disclosure, [ example seven ] there is provided a timing method, further comprising:
in response to a trigger operation on a completion control in the second interface, determining the actual elapsed time of each job task, and displaying the determined actual elapsed time on a third interface.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and the technical principles employed. Those skilled in the art should understand that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the above features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, a technical solution formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A method of timing, comprising:
in response to a trigger operation on a start control in a first interface, collecting a job image, and performing job task identification on the job image; and
performing timing based on the identified job task, and displaying a timing result on a second interface.
2. The method of claim 1, wherein said capturing a job image comprises:
acquiring a first area image according to a preset rule;
preprocessing the first area image to obtain the job image.
3. The method according to claim 2, wherein the acquiring the first area image according to the preset rule comprises:
acquiring the first area image at preset time intervals; and/or,
acquiring the first area image when an edge of the target object is detected entering the first area.
4. The method of claim 2, wherein the pre-processing the first region image comprises:
determining a target corner point in the first area image, and cropping the first area image according to the target corner point; and/or,
rectifying the first area image; and/or,
performing image enhancement on the first area image.
5. The method of claim 1, wherein said performing job task recognition on said job image comprises:
performing character recognition on the job image, and querying a preset database for the job task matching the recognized characters.
6. The method of claim 1, wherein timing based on the identified job task comprises:
determining whether the currently determined job task is the same as the previously determined job task;
if so, continuing to accumulate the timing of the currently determined job task;
if not, starting a new timing for the currently determined job task, and taking the accumulated result of the original timing as the actual elapsed time of the previously determined job task.
7. The method of claim 1, further comprising:
in response to a trigger operation on a completion control in the second interface, determining the actual elapsed time of each job task, and displaying the determined actual elapsed time on a third interface.
8. A time keeping device, comprising:
the identification module is used for, in response to a trigger operation on a start control in a first interface, acquiring a job image and performing job task identification on the job image;
the timing module is used for performing timing based on the identified job task and displaying a timing result on a second interface.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the timing method of any one of claims 1-7.
10. A storage medium containing computer-executable instructions for performing the timing method of any one of claims 1-7 when executed by a computer processor.
CN202011164423.9A 2020-10-27 2020-10-27 Timing method, timing device, electronic equipment and storage medium Pending CN112306602A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011164423.9A CN112306602A (en) 2020-10-27 2020-10-27 Timing method, timing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011164423.9A CN112306602A (en) 2020-10-27 2020-10-27 Timing method, timing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112306602A true CN112306602A (en) 2021-02-02

Family

ID=74332030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011164423.9A Pending CN112306602A (en) 2020-10-27 2020-10-27 Timing method, timing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112306602A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113254096A (en) * 2021-05-10 2021-08-13 维沃移动通信有限公司 Timing control using method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107391742A (en) * 2017-08-09 2017-11-24 上海斐讯数据通信技术有限公司 A kind of article reads recording method and the system of progress
CN108460707A (en) * 2018-03-12 2018-08-28 林为庆 A kind of the operation intelligent supervision method and its system of student
CN109147444A (en) * 2018-09-27 2019-01-04 广东小天才科技有限公司 A kind of study situation feedback method and intelligent desk lamp
CN109242736A (en) * 2018-09-27 2019-01-18 广东小天才科技有限公司 A kind of method and system for the study situation for assisting teacher to understand student
CN109544417A (en) * 2018-11-26 2019-03-29 广东小天才科技有限公司 A kind of learning effect determines method, apparatus, storage medium and terminal device
CN111081092A (en) * 2019-06-09 2020-04-28 广东小天才科技有限公司 Learning content output method and learning equipment


Similar Documents

Publication Publication Date Title
US11611846B2 (en) Generation, curation, and presentation of media collections
US20170249306A1 (en) Methods and systems for generation, curation, and presentation of media collections
WO2023087741A1 (en) Defect detection method and apparatus, and electronic device, storage medium and computer program product
US11443438B2 (en) Network module and distribution method and apparatus, electronic device, and storage medium
CN103729120A (en) Method for generating thumbnail image and electronic device thereof
CN110929070A (en) Image processing method, image processing device, electronic equipment and storage medium
US10394622B2 (en) Management system for notifications using contextual metadata
CN111310815A (en) Image recognition method and device, electronic equipment and storage medium
CN111209856B (en) Invoice information identification method and device, electronic equipment and storage medium
CN111488759A (en) Image processing method and device for animal face
CN111563398A (en) Method and device for determining information of target object
CN115311178A (en) Image splicing method, device, equipment and medium
CN110347875B (en) Video scene classification method and device, mobile terminal and storage medium
CN112306602A (en) Timing method, timing device, electronic equipment and storage medium
CN111507123B (en) Method and device for placing reading materials, reading equipment, electronic equipment and medium
CN115134533B (en) Shooting method and equipment for automatically calling vehicle-mounted image acquisition device
CN112306601A (en) Application interaction method and device, electronic equipment and storage medium
CN111783632A (en) Face detection method and device for video stream, electronic equipment and storage medium
US11810336B2 (en) Object display method and apparatus, electronic device, and computer readable storage medium
CN115546487A (en) Image model training method, device, medium and electronic equipment
CN112308511A (en) Task plan generation method and device, electronic equipment and storage medium
CN112308067A (en) Image processing method and device, electronic equipment and storage medium
CN111435525A (en) Reading plan determining method, device, equipment, server and storage medium
CN111340813A (en) Image instance segmentation method and device, electronic equipment and storage medium
CN112231023A (en) Information display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination