CN113031846B - Method and device for displaying description information of task and electronic equipment - Google Patents


Info

Publication number
CN113031846B
Authority
CN
China
Prior art keywords
task
target
displaying
description information
display area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110313791.3A
Other languages
Chinese (zh)
Other versions
CN113031846A (en)
Inventor
贾冲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202110313791.3A
Publication of CN113031846A
Application granted
Publication of CN113031846B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04886: Interaction techniques by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G06F 3/0481: Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0484: Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces

Abstract

The present disclosure provides a method, an apparatus, an electronic device, and a storage medium for displaying description information of a task. The method comprises: when a first task is in a processing completion state, the electronic device enters a task display mode, acquires a second task in the task display mode, and displays description information of the second task through a display interface of the electronic device. The display interface comprises a plurality of display areas, the display areas being projections of at least one polyhedron onto the display interface; the projections comprise at least two adjacent faces of the at least one polyhedron, and at least one edge of the polyhedron in the projections serves as a boundary between the plurality of display areas. Displaying the description information of the second task includes: extracting at least part of the description information of the second task, and adapting the at least part of the description information to a target display area and displaying it there, the target display area being one or more of the plurality of display areas. Embodiments of the present application can improve the user's interaction experience.

Description

Method and device for displaying description information of task and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for displaying description information of a task, an electronic device, and a storage medium.
Background
As living standards improve, electronic devices (such as mobile phones and tablet computers) have become an indispensable part of daily life. Electronic devices enrich people's lives by handling various tasks, such as playing videos. In actual use, besides processing various tasks, an electronic device can also present brief profiles of different tasks, so that a user can learn about the content of tasks they have encountered or are about to encounter.
However, existing methods for displaying such task profiles are limited. For example, when presenting an overview of learning-course tasks, usually only the course names are shown in a list. This display form is monotonous and dull, which detracts from the user's experience.
Disclosure of Invention
The embodiment of the disclosure at least provides a method and a device for displaying description information of a task, electronic equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a method for displaying description information of a task, where the description information includes at least one of a picture, a video, and text, the task is in one of a pending state, a processing state, and a processing completion state, and the method includes:
operating the electronic device in a task processing mode in which a first task is in the processing state, wherein at least one task needs to be processed;
receiving a first input of a user while the electronic device is operated in the task processing mode;
in response to the received first input, executing the first task, the executing including ending the first task such that the first task is in the processing completion state, and causing the electronic device to enter a task display mode;
when the first task is in the processing completion state, the electronic device enters the task display mode;
acquiring a second task, and displaying the description information of the second task through a display interface of the electronic device; the display interface comprises a plurality of display areas, the display areas are projections of at least one polyhedron onto the display interface, the projections comprise at least two adjacent faces of the at least one polyhedron, and at least one edge of the polyhedron in the projections serves as a boundary between the plurality of display areas;
the displaying the description information of the second task includes:
extracting at least part of the description information of the second task, and adapting and displaying the at least part of the description information of the second task and a target display area, wherein the target display area is one or more of the plurality of display areas.
In the embodiment of the disclosure, the display interface comprises a plurality of display areas, and the display areas are projections of at least one polyhedron onto the display interface, which gives the display interface a graphical structure and enriches its display effect compared with the existing list display. In addition, the description information of the second task is adapted to the target display area before being displayed, so that the description information matches the display area, improving the user's visual experience.
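To make the geometry concrete, the following is a minimal sketch of how adjacent faces of a projected polyhedron can partition a display interface into display areas. All names, the choice of a cube, and the isometric projection are illustrative assumptions; the disclosure does not prescribe a specific polyhedron or projection.

```python
import math

def iso_project(x, y, z):
    """Project a 3D point onto 2D screen coordinates with a simple isometric view."""
    a = math.radians(30)
    u = (x - y) * math.cos(a)
    v = (x + y) * math.sin(a) - z
    return (round(u, 3), round(v, 3))

# Unit-cube vertices, keyed by their (x, y, z) corner flags.
V = {(x, y, z): iso_project(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)}

# In an isometric view of a cube, three mutually adjacent faces are visible;
# their shared (projected) edges become the boundaries between display areas.
visible_faces = {
    "top":   [V[(0, 0, 1)], V[(1, 0, 1)], V[(1, 1, 1)], V[(0, 1, 1)]],
    "left":  [V[(0, 0, 0)], V[(0, 0, 1)], V[(1, 0, 1)], V[(1, 0, 0)]],
    "right": [V[(1, 0, 0)], V[(1, 0, 1)], V[(1, 1, 1)], V[(1, 1, 0)]],
}

display_areas = list(visible_faces)  # one display area per projected face
```

Each projected face is a 2D quadrilateral, and adjacent faces share a projected edge, which matches the claim that at least one edge of the polyhedron in the projection serves as a boundary between display areas.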
In a possible embodiment, the polyhedron is a cube, a cylinder, or a cone, and the polyhedrons projected onto the display area are of the same type.
In the embodiment of the disclosure, because the polyhedron is a cube, a cylinder, or a cone, the projection retains a stereoscopic appearance on the display area, improving the user's visual experience. In addition, because a given display area is formed by projections of polyhedrons of the same type, the display area looks tidy rather than cluttered.
According to the first aspect, in a possible implementation manner, the extracting at least part of the description information of the second task, and adapting and displaying the at least part of the description information with a target display area, includes at least one of the following:
extracting at least part of the text, performing attribute adaptation of the at least part of the text with the target display area to obtain target text, and displaying the target text on the target display area, wherein the attribute adaptation includes at least one of position adaptation, color adaptation, shape adaptation, and size adaptation; or
extracting at least part of the picture, performing attribute adaptation of the at least part of the picture with the target display area to obtain a target picture, and displaying the target picture on the target display area; or
extracting at least part of the video, performing attribute adaptation of the at least part of the video with the target display area to obtain a target video, and displaying the target video on the target display area.
In the embodiment of the disclosure, in the display process, the image, the text or the video is respectively adapted to the attribute of the target display area, so that the finally displayed image, the text or the video is matched with the target display area, and the visual experience of a user is further improved.
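A minimal sketch of such attribute adaptation is shown below: a piece of media is fitted to a target display area by adjusting its size and position. The function name, the aspect-preserving scaling rule, and the centering choice are assumptions for illustration, not the patent's specification.

```python
def adapt_to_area(media_w, media_h, area):
    """Scale media to fit inside `area` (a dict with x, y, w, h),
    preserving aspect ratio (size adaptation) and centering it
    (position adaptation)."""
    scale = min(area["w"] / media_w, area["h"] / media_h)
    w, h = media_w * scale, media_h * scale
    x = area["x"] + (area["w"] - w) / 2
    y = area["y"] + (area["h"] - h) / 2
    return {"x": x, "y": y, "w": w, "h": h}

# Adapt a 400x400 picture to a 200x100 target display area.
area = {"x": 0, "y": 0, "w": 200, "h": 100}
target_picture = adapt_to_area(400, 400, area)
```

Color and shape adaptation would follow the same pattern: read an attribute of the target display area and derive a matching attribute for the displayed media.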
According to the first aspect, in a possible implementation manner, the extracting at least part of the description information of the second task, and adapting and displaying the at least part of the description information with a target display area, further includes:
performing attribute adaptation among at least two of the target text, the target picture, and the target video.
In the embodiment of the present disclosure, by performing attribute adaptation among at least two of the target text, the target picture, and the target video, the display effects within the same display area can be made to match without appearing obtrusive, further improving the visual effect.
According to the first aspect, in a possible implementation manner, there are a plurality of second tasks, and different target texts, different target pictures, or different target videos respectively represent different second tasks;
the displaying the target text on the target display area includes:
displaying different target texts on different faces of the target display area;
the displaying the target picture on the target display area includes:
displaying different target pictures on different faces of the target display area;
the displaying the target video on the target display area includes:
displaying different target videos on different faces of the target display area.
In the embodiment of the disclosure, when there are a plurality of second tasks, different target videos, target texts, or target pictures are displayed on different faces of the target display area, so that different second tasks can be distinguished by face, which is convenient for the user to browse.
In a possible implementation manner, the second tasks represented by the target text, the target picture, or the target video displayed on the same display area of the target display area have common attribute information.
In the embodiment of the disclosure, second tasks having common attribute information are displayed on the same display area, so that the user can clearly see the relationship between different second tasks, which is convenient for review.
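The grouping step can be sketched as follows: second tasks sharing an attribute are assigned to the same display area, one task per face. The field names ("category", "name") and the faces-per-area limit are illustrative assumptions.

```python
from collections import defaultdict

def assign_to_areas(tasks, faces_per_area=3):
    """Group tasks by a shared attribute; each group maps to one display
    area, with one task per face of that area's projected polyhedron."""
    groups = defaultdict(list)
    for task in tasks:
        groups[task["category"]].append(task)  # common attribute information
    return {cat: group[:faces_per_area] for cat, group in groups.items()}

tasks = [
    {"name": "Calculus I", "category": "math"},
    {"name": "Linear Algebra", "category": "math"},
    {"name": "Watercolor Basics", "category": "art"},
]
areas = assign_to_areas(tasks)
```

Here the two "math" tasks land on two faces of the same display area, while the "art" task occupies a separate area, which mirrors the grouping the paragraph above describes.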
According to the first aspect, in one possible implementation, the obtaining the second task includes:
acquiring at least one second task from a plurality of tasks according to revenue information of each task, wherein the revenue information is related to the task content of each task.
In the embodiment of the disclosure, in the process of acquiring the second task, the second task to be displayed is selected according to the revenue information of each task, so that the most valuable tasks can be displayed and hold the greatest appeal for the user.
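A minimal sketch of this selection step, assuming the revenue information reduces to a single numeric score per task (the scoring field and the count of tasks to show are illustrative assumptions):

```python
def pick_second_tasks(tasks, count=2):
    """Return the `count` tasks with the highest revenue score."""
    return sorted(tasks, key=lambda t: t["revenue"], reverse=True)[:count]

candidates = [
    {"name": "Course A", "revenue": 0.4},
    {"name": "Course B", "revenue": 0.9},
    {"name": "Course C", "revenue": 0.7},
]
second_tasks = pick_second_tasks(candidates)
```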
In a possible implementation according to the first aspect, the target text, the target picture or the target video has link attribute information;
the electronic equipment is operated by the electronic equipment in the task display mode, and second input of a user for the target characters, the target pictures or the target videos is received;
and controlling the electronic equipment to switch from the task display mode to the task processing mode in response to the received second input of the user.
In the embodiment of the disclosure, because the target text, the target picture or the target video has the link attribute information, the task processing mode can be directly entered through the link, so that the operation of the user is facilitated, and the interaction experience of the user is improved.
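The link-driven mode switch can be sketched as below: a displayed item carries link attribute information, and a second input (e.g. a tap) on it switches the device from the task display mode to the task processing mode. All class, method, and URI names are illustrative assumptions.

```python
class Device:
    """Toy model of the electronic device's two modes."""

    def __init__(self):
        self.mode = "display"        # task display mode
        self.current_task = None

    def on_second_input(self, item):
        """Handle a second input on a displayed text/picture/video item."""
        link = item.get("link")      # link attribute information
        if link:
            self.mode = "processing"  # switch to the task processing mode
            self.current_task = link

device = Device()
device.on_second_input({"kind": "picture", "link": "task://course-42"})
```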
In a possible implementation manner, the description information further includes an indication element, and the indication element is used for characterizing identity information of the user; the method further comprises the following steps:
and extracting the indication elements and displaying the indication elements in other areas of the display interface except the display area.
In the embodiment of the disclosure, displaying the indication element that represents the user's identity information can reveal the interaction relationship (such as exploration and learning) between the user and the task, stimulating the user's curiosity and encouraging the user to share.
According to the first aspect, in one possible implementation, the description information further includes task evaluation information; the method further comprises the following steps:
and extracting the task evaluation information and displaying the task evaluation information on the display interface.
In the embodiment of the disclosure, the task evaluation information is also displayed, so that the user can clearly know the general view of the task, and the time spent by the user for searching the related information is saved.
In a second aspect, an embodiment of the present disclosure provides an apparatus for presenting description information of a task, where the description information includes at least one of a picture, a video, and a text, and the task is in one of a pending state, a processing state, and a processing completion state, the apparatus including:
a first processing module, configured to process at least one task in a task processing mode, where a first task is in the processing state;
the first receiving module is used for receiving a first input of a user in the task processing mode;
a task execution module configured to execute the first task in response to the received first input, the executing including ending the first task such that the first task is in the processing completion state;
the second processing module is used for controlling the electronic equipment to switch between the task processing mode and the task display mode;
the acquisition and display module is used for acquiring a second task in the task display mode and displaying the description information of the second task through a display interface of the electronic equipment; the display interface comprises a plurality of display areas, the display areas are projections of at least one polyhedron on the display interface, the projections comprise at least two adjacent surfaces of at least one polyhedron, and at least one edge of the polyhedron in the projections is used as a boundary of the plurality of display areas;
the acquisition and presentation module is specifically configured to:
extracting at least part of the description information of the second task, and adapting and displaying the at least part of the description information of the second task and a target display area, wherein the target display area is one or more of the plurality of display areas.
According to the second aspect, in a possible embodiment, the polyhedron is a cube, a cylinder, or a cone, and the polyhedrons projected onto the display area are of the same type.
According to the second aspect, in a possible implementation, the acquisition and presentation module is specifically configured to perform at least one of the following operations:
extracting at least part of the text, performing attribute adaptation of the at least part of the text with the target display area to obtain target text, and displaying the target text on the target display area, wherein the attribute adaptation includes at least one of position adaptation, color adaptation, shape adaptation, and size adaptation; or
extracting at least part of the picture, performing attribute adaptation of the at least part of the picture with the target display area to obtain a target picture, and displaying the target picture on the target display area; or
extracting at least part of the video, performing attribute adaptation of the at least part of the video with the target display area to obtain a target video, and displaying the target video on the target display area.
According to a second aspect, in a possible implementation manner, the acquisition and presentation module is further configured to:
and performing attribute adaptation on at least two of the target characters, the target pictures and the target videos.
According to the second aspect, in a possible implementation manner, there are a plurality of second tasks, and different target texts, different target pictures, or different target videos respectively represent different second tasks; the acquisition and presentation module is specifically configured to:
display different target texts on different faces of the target display area; or
display different target pictures on different faces of the target display area; or
display different target videos on different faces of the target display area.
According to the second aspect, in one possible implementation, the second tasks represented by the target text, the target picture or the target video displayed on the same one of the target display areas have common attribute information.
According to a second aspect, in a possible implementation, the acquisition and presentation module is specifically configured to:
acquire at least one second task from a plurality of tasks according to revenue information of each task, wherein the revenue information is related to the task content of each task.
In a possible embodiment, the target text, the target picture or the target video has link attribute information according to the second aspect; the apparatus also includes a second receiving module;
the second receiving module is used for receiving a second input of the user for the target text, the target picture or the target video in the task display mode;
the second processing module is further configured to:
and controlling the electronic equipment to switch from the task display mode to the task processing mode in response to the received second input of the user.
According to the second aspect, in a possible implementation, the description information further includes an indication element, where the indication element is used to characterize identity information of a user; the acquisition and display module is further configured to:
and extracting the indication elements and displaying the indication elements in other areas of the display interface except the display area.
According to a second aspect, in a possible implementation, the description information further comprises task rating information; the acquisition and display module is further configured to:
and extracting the task evaluation information and displaying the task evaluation information on the display interface.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the method for presenting description information of a task according to the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the method for presenting description information of a task according to the first aspect.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below. The drawings, which are incorporated in and form a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
FIG. 1 is a schematic diagram illustrating an execution subject of a method for presenting description information of a task provided by an embodiment of the present disclosure;
FIG. 2 is a flow chart illustrating a method for presenting description information of a task provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a first display interface for displaying description information provided by an embodiment of the present disclosure;
FIG. 4 is a diagram illustrating a second display interface for presenting descriptive information provided by an embodiment of the present disclosure;
FIG. 5 is a diagram illustrating a third display interface for presenting descriptive information provided by an embodiment of the present disclosure;
FIG. 6 is a diagram illustrating a fourth display interface for presenting descriptive information provided by an embodiment of the present disclosure;
FIG. 7 is a flow chart illustrating another method for presenting description information of a task provided by an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram illustrating an apparatus for presenting description information of a task according to an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of another apparatus for presenting description information of a task according to an embodiment of the present disclosure;
fig. 10 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments, as generally described and illustrated in the figures, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments of the disclosure without creative effort shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
As living standards improve, electronic devices (such as mobile phones and tablet computers) have become an indispensable part of daily life. Electronic devices enrich people's lives by processing various tasks (such as playing videos); in actual use, besides processing various tasks, an electronic device can also present brief profiles of different tasks, so that a user can learn about the content of tasks they have encountered or are about to encounter.
Research shows that existing methods for displaying such task profiles are limited. For example, when presenting an overview of learning-course tasks, usually only the course names are shown in a list. This display form is monotonous and dull, which detracts from the user's visual experience.
The present disclosure provides a method for displaying description information of a task, the description information including at least one of a picture, a video, and text, the task being in one of a pending state, a processing state, and a processing completion state, the method comprising:
operating the electronic device in a task processing mode in which a first task is in the processing state, wherein at least one task needs to be processed;
receiving a first input of a user while the electronic device is operated in the task processing mode;
in response to the received first input, executing the first task, the executing including ending the first task such that the first task is in the processing completion state, and causing the electronic device to enter a task display mode;
when the first task is in the processing completion state, the electronic device enters the task display mode;
acquiring a second task, and displaying the description information of the second task through a display interface of the electronic equipment; the display interface comprises a plurality of display areas, the display areas are projections of at least one polyhedron on the display interface, the projections comprise at least two adjacent surfaces of at least one polyhedron, and at least one edge of the polyhedron in the projections is used as a boundary of the plurality of display areas;
the displaying the description information of the second task includes:
extracting at least part of the description information of the second task, and adapting and displaying the at least part of the description information of the second task and a target display area, wherein the target display area is one or more of the plurality of display areas.
In the embodiment of the disclosure, the display interface comprises a plurality of display areas, and the display areas are projections of at least one polyhedron on the display interface, so that the display interface is patterned, and compared with the existing list display, the display effect of the display interface is enriched. In addition, the description information of the second task is adapted to the target display area and displayed, so that the description information is matched with the display area, and the visual experience of a user is improved.
Referring to fig. 1, a schematic diagram of an execution subject of a method for presenting description information of a task according to an embodiment of the present disclosure is shown. The execution subject of the method is an electronic device 100, which may include a terminal and a server. For example, the method may be applied to a terminal, and the terminal may be the smart phone 10, the desktop computer 20, or the notebook computer 30 shown in fig. 1, or may be a smart speaker, a smart watch, a tablet computer, or the like not shown in fig. 1, without limitation. The method may also be applied to the server 40, or to an implementation environment consisting of a terminal and the server 40. The server 40 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, big data, and artificial intelligence platforms.
In other embodiments, the electronic device 100 may also include an AR (Augmented Reality) device, a VR (Virtual Reality) device, an MR (Mixed Reality) device, and the like. For example, the AR device may be a mobile phone or a tablet computer with an AR function, or may be AR glasses, which is not limited herein.
In some embodiments, the server 40 may communicate with the smart phone 10, the desktop computer 20, and the notebook computer 30 via the network 50. Network 50 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, to name a few.
In addition, the method for presenting the description information of the task may be software running in the terminal or the server, such as an application program having a function of presenting the description information of the task. In some possible implementations, the method for presenting description information of a task may be implemented by a processor calling computer readable instructions stored in a memory.
In the embodiment of the disclosure, the electronic device may work in a task processing mode or a task display mode. In the task processing mode, at least one task needs to be processed; in the task display mode, description information of a task may be displayed. The description information of a task includes at least one of pictures, videos, and text, and the task is in one of a to-be-processed state, a processing state, and a processing completion state.
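The three task states and the composite description information named above could be modeled, for example, as follows; the class and field names are hypothetical and only mirror the states and information types listed in this paragraph:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class TaskState(Enum):
    TO_BE_PROCESSED = auto()   # to-be-processed state
    PROCESSING = auto()        # processing state
    COMPLETED = auto()         # processing completion state

@dataclass
class Task:
    name: str
    state: TaskState = TaskState.TO_BE_PROCESSED
    # The description information may combine pictures, videos and text.
    pictures: list = field(default_factory=list)
    videos: list = field(default_factory=list)
    text: str = ""

    def start(self):
        self.state = TaskState.PROCESSING

    def end(self):
        self.state = TaskState.COMPLETED

course = Task(name="Lesson N", text="Unit 1: greetings")
course.start()   # the first task enters the processing state
course.end()     # ending the first task -> processing completion state
```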
Illustratively, tasks include, but are not limited to, lessons, movies, songs, games, and the like. The following tasks in the embodiments of the present disclosure are explained by taking courses as examples.
Specifically, in the task processing mode, the first task is in the processing state. For example, when a user uses an application program to learn a language (such as English), in the task processing mode at least one task (course) is in the to-be-processed state, and the user needs to learn the language.
Referring to fig. 2, a flowchart of a method for presenting description information of a task according to an embodiment of the present disclosure is shown, where the method for presenting description information of a task includes the following steps S101 to S102:
S101, while the electronic device operates in the task processing mode, a first input of a user is received; in response to the received first input, the first task is executed, where executing the first task includes ending the first task so that the first task is in the processing completion state, and the electronic device enters the task display mode.
It is understood that, during the task, the user may operate the electronic device according to the user's own requirements; for example, the user may pause, repeat playing, fast forward, rewind, or end the task. Any of these operations, alone or in combination, constitutes the first input.
In addition, the type of the first input may include a touch operation, a voice control operation, a gesture control operation, and the like on the electronic device. For example, the user may cause the electronic device to pause the current task by touching a "pause icon" on the electronic device; the user may also pause the current task by speaking "please pause playing"; the user may further pause the current task by inputting a "pause gesture". Here, the gesture operation may be a specific gesture applied to the screen of the electronic device (e.g., a vertical sliding gesture or a horizontal sliding gesture), or a specific mid-air gesture made away from the screen; for example, if the user makes an OK gesture in front of the electronic device and the camera of the electronic device captures the gesture, the current task may be played.
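One possible way to normalize touch, voice, and gesture inputs into a single "first input" is a dispatch table mapping each recognized input onto a playback action; the mapping below is purely illustrative and not taken from the disclosure:

```python
# Hypothetical dispatch table: the "first input" may arrive as a touch,
# a voice command, or a gesture; all map onto the same playback actions.

ACTIONS = {"pause", "resume", "fast_forward", "rewind", "end"}

INPUT_MAP = {
    ("touch", "pause_icon"): "pause",
    ("voice", "please pause playing"): "pause",
    ("gesture", "ok"): "resume",            # OK gesture captured by the camera
    ("gesture", "swipe_right"): "fast_forward",
    ("touch", "end_icon"): "end",
}

def handle_first_input(kind, payload):
    """Resolve an input event to a playback action, or ignore it."""
    action = INPUT_MAP.get((kind, payload))
    if action is None:
        return "ignored"
    assert action in ACTIONS
    return action
```

In a real device the payloads would come from a touch driver, a speech recognizer, and a camera-based gesture recognizer; only the unified action set matters to the task logic.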
Accordingly, the electronic device may execute the first task in response to the received first input, where executing the first task includes ending the first task so that the first task is in the processing completion state.
In the following, "executing the first task" is explained by taking different tasks as examples.
In the case where the task is a course, executing the first task includes: continuing to display the current learning content until the current course is finished, pausing the current learning content, fast-forwarding the current learning content, ending the current learning content, and the like.
In the case where the task is a movie, executing the first task includes: continuing to play the current movie until it finishes, pausing the current movie, fast-forwarding the current movie, ending the playing of the current movie, and the like. The case where the task is a song is similar to that of a movie and is not described again here.
In the case where the task is a game, executing the first task includes: the progress of the game is continued until the current game is cleared, the current game is paused or ended, etc., based on the user's interaction.
In this embodiment, when the first task is in the processing completion state, the electronic device enters the task display mode. That is, when the current course learning is completed, the course display mode is entered. The first task may reach the processing completion state either when the user finishes learning the current course, so that the current course ends automatically, or when the user actively terminates learning partway through, so that the current course ends.
When the first task is in the processing completion state, the electronic device enters the task display mode, and then step S102 is executed.
It should be noted that, when the first task is in the processing completion state, the electronic device may enter the task display mode in response to a first input of a user, or may automatically control the electronic device to enter the task display mode after the first task is in the processing completion state and a preset time elapses; the electronic device may also be controlled to enter the task display mode when the first task is in the processing completion state and the electronic device is detected to be in a preset state (for example, a screen locking state or a screen turning-off state), which is not limited herein.
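The three triggers for entering the task display mode described above (an explicit user input, a preset time elapsing, or the device reaching a preset state such as the lock screen) might be combined as in the following sketch; the parameter names and the 5-second timeout are assumptions for illustration only:

```python
def should_enter_display_mode(task_completed, user_requested=False,
                              seconds_since_completion=0.0,
                              preset_timeout=5.0,
                              device_state="active"):
    """Return True if the device should switch to the task display mode.

    Mirrors the three triggers: an explicit first input, a preset time
    elapsing after completion, or the device entering a preset state
    such as the lock screen. All names are illustrative.
    """
    if not task_completed:
        return False                       # only completed tasks trigger it
    if user_requested:
        return True                        # trigger 1: first input of the user
    if seconds_since_completion >= preset_timeout:
        return True                        # trigger 2: preset time elapsed
    if device_state in ("locked", "screen_off"):
        return True                        # trigger 3: preset device state
    return False
```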
S102, a second task is acquired, and the description information of the second task is displayed through a display interface of the electronic device. The display interface includes a plurality of display areas, the display areas are projections of at least one polyhedron onto the display interface, each projection contains at least two adjacent faces of at least one polyhedron, and at least one edge of the polyhedron in the projection serves as a boundary between the plurality of display areas. Displaying the description information of the second task includes: extracting at least part of the description information of the second task, and adapting the at least part of the description information of the second task to a target display area and displaying it, where the target display area is one or more of the plurality of display areas.
Illustratively, the first task and the second task may be the same or different. For example, in a scenario where the task is a course, the first task corresponds to a first course, the second task corresponds to a second course, and the first course and the second course may be the same or different.
The first course and the second course being the same means that the course the user learned and the displayed course are the same course. For example, in the task processing mode the user learns the N-th course, and in the task display mode the description information of the N-th course can be displayed, so that the user can review the learned course. The first course and the second course being different means that the course the user learned and the displayed course are different courses. For example, in the task processing mode the user has learned the N-th course, and in the task display mode the description information of the (N+1)-th course may be displayed, so that the user can learn about the course still to be learned. In the embodiment of the present disclosure, the first task and the second task are different.
Illustratively, the second task may be a task to be processed, a task that has already been processed, or a combination thereof. There may be one or more second tasks.
Illustratively, the display interface may be a two-dimensional display interface, and may also be an AR virtual interface, a VR virtual interface, or an MR virtual interface, which is not limited herein.
Illustratively, at least one virtual object may be included in the virtual interface. The virtual object specifically refers to virtual information generated by computer simulation, and may be a virtual three-dimensional object, such as a virtual animal, a virtual plant, a virtual other object, or a virtual planar object, such as a virtual arrow, a virtual character, a virtual picture, or the like.
In the case where the display interface is a two-dimensional display interface, the projection refers to a two-dimensional presentation of a three-dimensional object (the polyhedron). For a virtual interface, in some implementations, the projection may mean that different images are displayed to the left and right eyes to form a 3D depth effect; in other embodiments, the polyhedron may be a virtual three-dimensional object, and the projection refers to at least one plane image adapted to the user's visual angle at the current time, that is, the display face of the polyhedron facing the user needs to be determined according to the pose of the user relative to the polyhedron.
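Determining which faces of a virtual polyhedron face the user, as described above, amounts to back-face culling: a face is shown when its outward normal points toward the viewer. A minimal sketch, assuming a cube with axis-aligned faces and a view direction derived from the user's pose (all names illustrative):

```python
def dot(a, b):
    """Dot product of two 3D vectors."""
    return sum(x * y for x, y in zip(a, b))

# Outward normals of a cube's six faces.
FACE_NORMALS = {
    "front": (0, 0, 1), "back": (0, 0, -1),
    "top": (0, 1, 0), "bottom": (0, -1, 0),
    "right": (1, 0, 0), "left": (-1, 0, 0),
}

def visible_faces(view_dir):
    """Faces whose outward normal points toward the viewer.

    view_dir is the direction from the polyhedron toward the user; a
    face is a candidate display face when the angle between its normal
    and view_dir is less than 90 degrees.
    """
    return sorted(name for name, n in FACE_NORMALS.items()
                  if dot(n, view_dir) > 0)

# A user looking from up-front-right sees three adjacent faces at once,
# matching "at least two adjacent faces" appearing in one projection.
faces = visible_faces((1, 1, 1))
```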
Illustratively, referring to fig. 3, the display interface 90 includes a plurality of display areas 91a, 91b, and 91c. Each display area is a projection of at least one polyhedron onto the display interface 90. For example, the display area 91a is a projection of two cubes onto the display interface 90, and the display area 91b is a projection of one cube onto the display interface. As can be seen from fig. 3, the projection contains at least two adjacent faces of at least one polyhedron, and at least one edge L of the polyhedron in the projection serves as a boundary between the plurality of display areas. In this embodiment, the plurality of display areas of the display interface are presented as projections of at least one polyhedron, so that the display interface is patterned, the display effect of the display interface is enriched, a stereoscopic visual experience is provided, and the visual experience of the user is improved.
In some embodiments, in order to improve the display effect of each display area, a plurality of polyhedrons are projected onto the display areas, and the polyhedrons are of the same type. In addition, the heights of the plurality of polyhedrons projected onto a display area may differ and may be set according to actual requirements.
Referring to fig. 4, in some embodiments, the polyhedron may also be a cylinder, and the plurality of display areas 91 of the display interface 90 are projections of at least one cylinder onto the display interface 90.
Referring to fig. 5, in other embodiments, the polyhedron may also be a pyramid (such as a triangular pyramid), and the plurality of display areas 91 of the display interface 90 are projections of at least one pyramid onto the display interface 90.
In some embodiments, in the process of acquiring the second task, at least one second task may be obtained from a plurality of tasks according to the revenue information of each task; the revenue information is related to the task content of each task. That is, at least one second course is obtained from a plurality of courses according to the revenue information of each course, where the revenue information is related to the course content. In this way, the description information of second courses with higher value can be displayed, attracting the user to continue learning the corresponding courses.
Illustratively, the description information is used for describing the second task, that is, for presenting a brief description of the second course, so that the user can clearly know the course content to be learned next. It is understood that, since the description information of the second course may be extensive, only a part may be extracted when the description information of the second course is extracted.
In other embodiments, a preset number of second tasks may be selected from the plurality of tasks to be processed according to the pending time of each task. For example, when the task is a course, the description information of a preset number of courses (for example, 5 courses) to be taken in the coming week is selected from a plurality of pending courses for display. In addition, the second task may also be obtained according to characteristics of the task; for example, according to how highlighted the course content is, the description information of several courses is selected from the plurality of pending courses for display, so as to attract the user.
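The two selection strategies above — by revenue information or by pending time — can be sketched as simple sort-and-slice helpers; the data shapes and names are illustrative assumptions, not part of the disclosure:

```python
# Illustrative selection strategies for the second task(s): by estimated
# revenue of the task content, or by how soon a pending course is due.

def pick_by_revenue(tasks, k=1):
    """tasks: list of (name, revenue) pairs; keep the k highest-revenue ones."""
    return [name for name, revenue in
            sorted(tasks, key=lambda t: t[1], reverse=True)[:k]]

def pick_by_schedule(tasks, k=5):
    """tasks: list of (name, days_until_due) pairs; keep the k soonest."""
    return [name for name, due in sorted(tasks, key=lambda t: t[1])[:k]]

courses = [("unit 1", 120), ("unit 2", 300), ("unit 3", 80)]
best = pick_by_revenue(courses, k=1)   # the highest-value course to display
```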
Therefore, for the above S102, extracting at least part of the description information of the second task and adapting it to the target display area for display includes at least one of the following:
extracting at least part of the characters, performing attribute adaptation on the at least part of the characters and the target display area to obtain target characters, and displaying the target characters in the target display area, where the attribute adaptation includes at least one of position adaptation, color adaptation, shape adaptation, and size adaptation; or,
extracting at least part of the picture, performing attribute adaptation on the at least part of the picture and the target display area to obtain a target picture, and displaying the target picture in the target display area; or,
extracting at least part of the video, performing attribute adaptation on the at least part of the video and the target display area to obtain a target video, and displaying the target video on the target display area.
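Size and position adaptation, two of the attribute adaptations listed above, might look like the following sketch; the aspect-ratio-preserving rule and all names are assumptions, since the disclosure does not fix a particular adaptation algorithm:

```python
# Sketch of "attribute adaptation": size adaptation scales the extracted
# picture to fit the target display area while preserving aspect ratio,
# and position adaptation centres it within the area.

def size_adapt(content_w, content_h, area_w, area_h):
    """Scale content to fit the area without distortion."""
    scale = min(area_w / content_w, area_h / content_h)
    return content_w * scale, content_h * scale

def position_adapt(content_w, content_h, area_w, area_h):
    """Centre the size-adapted content; returns (x, y, width, height)."""
    w, h = size_adapt(content_w, content_h, area_w, area_h)
    return ((area_w - w) / 2, (area_h - h) / 2, w, h)

# A 400x300 course cover adapted to a 200x200 display face.
rect = position_adapt(400, 300, 200, 200)
```

Colour and shape adaptation would follow the same pattern: derive the displayed attribute from both the extracted content and the target area rather than from the content alone.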
The characters in the present embodiment are not limited to Chinese characters; they also include letters, numbers, and operator symbols.
Referring to fig. 6, in some embodiments, the picture displayed on each face of the polyhedron may be a cover page of the second course, the text may be a name of the course and a price of the course, and the video may be a corresponding animation for explaining the course. Therefore, in the display process, the images, the characters or the videos are respectively matched with the attributes of the target display area, so that the finally displayed images, characters or videos are matched with the target display area, and the visual experience of a user is further improved.
In addition, it should be noted that, in the process of displaying, a plurality of target areas may be selected from the plurality of display areas according to the number of the second tasks and the amount of the description information, and different surfaces in the target areas may be selected for displaying.
In some embodiments, in the process of adapting at least part of the description information of the second task to the target display area, attribute adaptation is also performed between at least two of the target characters, the target picture, and the target video. Specifically, this may include attribute adaptation between the target characters and the target picture on the same display face of the same polyhedron in the target display area, for example, between the course name and the course cover F in fig. 6; it may also include attribute adaptation between target characters, between target pictures, or between target videos on adjacent display faces of the same polyhedron, for example, between course name 1 and course name 2. In this way, the display effects within the same display area match each other and are not obtrusive, further improving the visual effect.
In some embodiments, in the case where there are a plurality of second tasks, different target characters may be displayed on different faces of the target display area; or different target pictures may be displayed on different faces of the target display area; or different target videos may be displayed on different faces of the target display area. In this way, the description information of different second tasks can be displayed on different faces, so that the tasks can be distinguished by face and the user can browse them conveniently.
In addition, in order to help the user understand and sort out the relationship among the plurality of second tasks, the second tasks represented by the target characters, target pictures, or target videos displayed in the same display area of the target display area have common attribute information. For example, in fig. 6, the second course a, the second course b, and the second course c displayed in the display area 91a may be courses of the same unit. In this way, the user can clearly know the course situation under different units.
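Grouping second tasks by a common attribute (here, the course unit) and assigning one task per face of a display area could be sketched as follows; the face names and data shapes are hypothetical:

```python
# Sketch: second tasks sharing a common attribute (e.g. the same unit)
# are grouped onto the faces of one display area, one task per face.

def group_by_unit(tasks):
    """tasks: list of (course_name, unit) pairs -> {unit: [course_name, ...]}."""
    groups = {}
    for name, unit in tasks:
        groups.setdefault(unit, []).append(name)
    return groups

def assign_to_faces(courses, faces):
    """Pair each course of one unit with a face of the same polyhedron."""
    return dict(zip(faces, courses))

tasks = [("second course a", 1), ("second course b", 1),
         ("second course c", 1), ("second course d", 2)]
by_unit = group_by_unit(tasks)
layout = assign_to_faces(by_unit[1], ["top", "front", "right"])
```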
In some embodiments, for convenience of user operation, the target characters, the target picture, or the target video has link attribute information. Referring to fig. 7, a flowchart of another method for presenting description information of a task provided by an embodiment of the present disclosure is shown; it differs from the method in fig. 2 in that, after step S102, the method further includes the following steps:
S103, while the electronic device operates in the task display mode, a second input of the user for the target characters, the target picture, or the target video is received.
S104, in response to the received second input of the user, the electronic device is controlled to switch from the task display mode to the task processing mode.
In this embodiment, since the target text, the target picture, or the target video has link attribute information, the task processing mode can be directly entered through the link, for example, after the user clicks the "second course a" in fig. 6, the task processing mode can be entered, and the user starts to learn the related content of the second course a.
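Steps S103/S104 — a second input on a linked item switching the device back to the task processing mode — can be sketched as a small state holder; the `course://` link scheme and all names are invented for illustration:

```python
# Sketch of the mode switch in S103/S104: displayed items carry link
# attribute information, and a second input on an item switches the
# device from the display mode back to the processing mode.

class Device:
    def __init__(self):
        self.mode = "display"               # task display mode
        self.current_task = None
        self.links = {"second course a": "course://a"}  # hypothetical links

    def on_second_input(self, item):
        """Follow the item's link; return True if the mode was switched."""
        link = self.links.get(item)
        if link is None:
            return False                    # item carries no link attribute
        self.mode = "processing"            # task display -> task processing
        self.current_task = link
        return True

device = Device()
device.on_second_input("second course a")
```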
Referring again to fig. 6, in some embodiments, the description information further includes an indication element used to characterize the identity information of the user, and the method further includes: extracting the indication element and displaying it in an area of the display interface other than the display areas. Displaying an indication element that represents the identity of the user reveals the interaction relationship (such as exploration and learning) between the user and the task, thereby stimulating the user's curiosity and arousing the user's desire to share. The indication element may be the user's avatar, nickname, or the like.
In other embodiments, the description information further includes task evaluation information, and the method further includes: extracting the task evaluation information and displaying it on the display interface. In this way, the user can clearly know the overview of the task, saving the time the user would otherwise spend searching for related information. For example, evaluation information such as "keep learning for another 7 days, which is equivalent to earning an extra 300 yuan" may be displayed, so that the user clearly knows the value of the courses to be learned in the next 7 days.
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same technical concept, the embodiment of the present disclosure further provides a device for displaying description information of a task corresponding to the method for displaying description information of a task, and as the principle of solving the problem of the device in the embodiment of the present disclosure is similar to the method for displaying description information of a task in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are omitted.
Referring to fig. 8, a schematic diagram of an apparatus 500 for displaying description information of a task according to an embodiment of the present disclosure, where the description information includes at least one of a picture, a video, and a text, the task is in one of a pending state, a processing state, and a processing completion state, and the apparatus includes:
a first processing module 501, configured to process at least one task in a task processing mode, where a first task is in the processing state in the task processing mode;
a first receiving module 502, configured to receive a first input of a user in the task processing mode;
a task execution module 503, configured to execute the first task in response to the received first input, where the executing the first task includes ending the first task, so that the first task is in the processing completed state;
a second processing module 504, configured to control the electronic device to switch between the task processing mode and the task displaying mode;
an obtaining and displaying module 505, configured to obtain a second task, and display the description information of the second task through a display interface of the electronic device; the display interface comprises a plurality of display areas, the display areas are projections of at least one polyhedron on the display interface, the projections comprise at least two adjacent surfaces of at least one polyhedron, and at least one edge of the polyhedron in the projections is used as a boundary of the plurality of display areas;
the acquisition and presentation module 505 is specifically configured to:
extracting at least part of the description information of the second task, and adapting and displaying the at least part of the description information of the second task and a target display area, wherein the target display area is one or more of the plurality of display areas.
In a possible embodiment, the polyhedron is a cube, a cylinder, or a pyramid, and the plurality of polyhedrons projected onto the display area are of the same type.
In a possible implementation, the obtaining and presenting module 505 is specifically configured to perform at least one of the following operations:
extracting at least part of the characters, performing attribute adaptation on the at least part of the characters and the target display area to obtain target characters, and displaying the target characters in the target display area, where the attribute adaptation includes at least one of position adaptation, color adaptation, shape adaptation, and size adaptation; or,
extracting at least part of the picture, performing attribute adaptation on the at least part of the picture and the target display area to obtain a target picture, and displaying the target picture in the target display area; or,
extracting at least part of the video, performing attribute adaptation on the at least part of the video and the target display area to obtain a target video, and displaying the target video on the target display area.
In a possible implementation, the obtaining and presenting module 505 is further configured to:
and performing attribute adaptation on at least two of the target characters, the target pictures and the target videos.
In a possible implementation, there are a plurality of second tasks, and different target characters, different target pictures, or different target videos are used to represent different second tasks; the acquisition and presentation module 505 is specifically configured to:
display different target characters on different faces of the target display area; or,
display different target pictures on different faces of the target display area; or,
display different target videos on different faces of the target display area.
In one possible embodiment, the second task represented by the target text, the target picture or the target video displayed on the same display area in the target display area has common attribute information.
In one possible implementation, the obtaining and presenting module 505 is specifically configured to:
acquiring at least one second task from a plurality of tasks according to the income information of each task; the revenue information is related to task content for each of the tasks.
Referring to fig. 9, in a possible implementation, the target text, the target picture, or the target video has link attribute information; the apparatus further comprises a second receiving module 506;
the second receiving module 506 is configured to receive, in the task display mode, a second input of the user for the target text, the target picture, or the target video;
the second processing module 503 is further configured to:
and controlling the electronic equipment to switch from the task display mode to the task processing mode in response to the received second input of the user.
In a possible implementation, the description information further includes an indication element, and the indication element is used for characterizing identity information of the user; the acquisition and display module 505 is further configured to:
and extracting the indication elements and displaying the indication elements in other areas of the display interface except the display area.
In a possible embodiment, the description information further includes task evaluation information; the acquisition and display module 505 is further configured to:
and extracting the task evaluation information and displaying the task evaluation information on the display interface.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, the embodiment of the disclosure also provides an electronic device. Referring to fig. 10, a schematic structural diagram of an electronic device 700 provided in the embodiment of the present disclosure includes a processor 701, a memory 702, and a bus 703. The memory 702 is used for storing execution instructions and includes a memory 7021 and an external memory 7022; the memory 7021 is also referred to as an internal memory and temporarily stores operation data in the processor 701 and data exchanged with an external memory 7022 such as a hard disk, and the processor 701 exchanges data with the external memory 7022 via the memory 7021.
In this embodiment, the memory 702 is specifically configured to store application program codes for executing the scheme of the present application, and is controlled by the processor 701 to execute. That is, when the electronic device 700 is operated, the processor 701 and the memory 702 communicate with each other through the bus 703, so that the processor 701 executes the application program code stored in the memory 702, thereby executing the method described in any of the foregoing embodiments.
The Memory 702 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 701 may be an integrated circuit chip having signal processing capabilities. The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 700. In other embodiments of the present application, the electronic device 700 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the method for presenting description information of a task in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute steps of the method for displaying description information of a task in the foregoing method embodiments, which may be referred to specifically for the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only one kind of logical division, and other divisions are possible in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections of devices or units through communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions for some of the technical features, within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered by it. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

1. A method for displaying description information of a task, the description information including at least one of a picture, a video and a text, the task being in one of a pending state, a processing state and a processing completion state, the method comprising:
operating the electronic device in a task processing mode in which a first task is in the processing state, wherein at least one task is to be processed;
receiving a first input of a user while the electronic device is operated in the task processing mode;
in response to the received first input, executing the first task, wherein executing the first task comprises ending the first task so that the first task is in the processing completion state, and causing the electronic device to enter a task display mode;
entering, by the electronic device, the task display mode under the condition that the first task is in the processing completion state;
acquiring a second task, and displaying the description information of the second task through a display interface of the electronic device; wherein the display interface comprises a plurality of display areas, the plurality of display areas are projections of at least one polyhedron on the display interface, the projections comprise at least two adjacent surfaces of the at least one polyhedron, and at least one edge of the polyhedron in the projections serves as a boundary between the plurality of display areas;
wherein the displaying the description information of the second task comprises:
extracting at least part of the description information of the second task, adapting the at least part of the description information of the second task to a target display area, and displaying it, wherein the target display area is one or more of the plurality of display areas.
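Purely as an illustrative sketch, and not part of the patent text (the isometric projection, the cube, and all function names are assumptions made for illustration), the geometry described in claim 1, where the projected faces of a polyhedron form display areas and the projected shared edges form the boundaries between them, could be modeled as:

```python
# Sketch: project the three visible faces of a unit cube onto a 2D display
# interface. Each projected face becomes a display area, and edges shared by
# two projected faces act as the boundaries between display areas.
import math

def iso_project(x, y, z):
    """Simple isometric projection of a 3D point onto 2D screen coordinates."""
    a = math.radians(30)
    u = (x - y) * math.cos(a)
    v = (x + y) * math.sin(a) - z
    return (round(u, 3), round(v, 3))

# Visible faces of a unit cube (top, left, right), each as a 3D vertex loop.
FACES = {
    "top":   [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)],
    "left":  [(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0)],
    "right": [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)],
}

def display_areas():
    """Each display area is the 2D polygon of one projected face."""
    return {name: [iso_project(*p) for p in verts] for name, verts in FACES.items()}

def shared_boundaries(areas):
    """2D edges that belong to two different areas are the area boundaries."""
    seen, boundaries = {}, []
    for name, poly in areas.items():
        for i in range(len(poly)):
            edge = frozenset({poly[i], poly[(i + 1) % len(poly)]})
            if edge in seen and seen[edge] != name:
                boundaries.append(tuple(sorted(edge)))
            seen[edge] = name
    return boundaries
```

For the three visible cube faces above, the sketch yields three display areas separated by the three projected edges they share, matching the "at least one edge ... used as a boundary" language of the claim.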
2. The method according to claim 1, wherein the polyhedron is a cube, a cylinder, or a pyramid, a plurality of polyhedrons are projected into the display areas, and the polyhedrons are of the same type.
3. The method of claim 1, wherein the extracting at least part of the description information of the second task and adapting and displaying it with a target display area comprises at least one of:
extracting at least part of the text, performing attribute adaptation between the at least part of the text and the target display area to obtain target text, and displaying the target text on the target display area, wherein the attribute adaptation comprises at least one of position adaptation, color adaptation, shape adaptation, and size adaptation; or,
extracting at least part of the picture, performing attribute adaptation between the at least part of the picture and the target display area to obtain a target picture, and displaying the target picture on the target display area; or,
extracting at least part of the video, performing attribute adaptation between the at least part of the video and the target display area to obtain a target video, and displaying the target video on the target display area.
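As an illustrative sketch only, not part of the patent text (the `Rect` type and function name are invented for illustration), the simplest forms of the "size adaptation" and "position adaptation" named in claim 3 amount to uniformly scaling a piece of content to fit a target display area and then centring it:

```python
# Sketch: uniformly scale content (text box, picture, or video frame) so it
# fits inside a target display area, then centre it within that area.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

def adapt_size_and_position(content: Rect, area: Rect) -> Rect:
    """Scale the content to fit inside the area while keeping its aspect
    ratio (size adaptation), then centre it in the area (position adaptation)."""
    scale = min(area.w / content.w, area.h / content.h)
    w, h = content.w * scale, content.h * scale
    return Rect(area.x + (area.w - w) / 2, area.y + (area.h - h) / 2, w, h)
```

For example, a 400x300 picture adapted to a 100x200 display area is scaled by 0.25 to 100x75 and centred vertically; color and shape adaptation would follow the same pattern with the area's style attributes as inputs.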
4. The method of claim 3, wherein the extracting at least part of the description information of the second task and adapting and displaying it with a target display area further comprises:
performing attribute adaptation among at least two of the target text, the target picture, and the target video.
5. The method according to claim 3, wherein there are a plurality of second tasks, and different target texts, different target pictures, or different target videos respectively represent different second tasks;
the displaying the target text on the target display area comprises:
displaying different target texts on different surfaces of the target display area;
the displaying the target picture on the target display area comprises:
displaying different target pictures on different surfaces of the target display area;
the displaying the target video on the target display area comprises:
displaying different target videos on different surfaces of the target display area.
6. The method of claim 5, wherein the second tasks represented by the target texts, the target pictures, or the target videos displayed on a same target display area have common attribute information.
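Purely as an illustrative sketch of claims 5 and 6, and not part of the patent text (the task dictionaries, the `category` attribute, and the helper names are all invented), tasks sharing common attribute information could be grouped onto the same polyhedron, with each task shown on a different face:

```python
# Sketch: assign each of a plurality of second tasks to a (polyhedron, face)
# pair, grouping tasks that share a common attribute onto the same polyhedron
# and giving each task within a polyhedron its own face.
from collections import defaultdict

def assign_faces(tasks, faces_per_polyhedron=3):
    """Map each task id to (polyhedron_id, face_index), grouped by category."""
    groups = defaultdict(list)
    for task in tasks:
        groups[task["category"]].append(task["id"])
    assignment, polyhedron = {}, 0
    for category in sorted(groups):
        for i, task_id in enumerate(groups[category]):
            assignment[task_id] = (polyhedron + i // faces_per_polyhedron,
                                   i % faces_per_polyhedron)
        # advance past however many polyhedrons this category filled (ceil)
        polyhedron += -(-len(groups[category]) // faces_per_polyhedron)
    return assignment
```

With three visible faces per projected cube, two tasks of one category land on faces 0 and 1 of the first polyhedron, and a task of a different category starts a new polyhedron.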
7. The method of claim 1, wherein the acquiring the second task comprises:
acquiring at least one second task from a plurality of tasks according to revenue information of each task, wherein the revenue information is related to the task content of each task.
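As an illustrative sketch of claim 7 only (the `revenue` field and function name are assumptions, not from the patent), selecting the second task(s) according to revenue information can be as simple as ranking the candidate tasks:

```python
# Sketch: acquire the second task(s) by ranking candidate tasks on their
# revenue information and taking the highest-revenue entries.
def pick_second_tasks(tasks, count=1):
    """Return the `count` tasks with the highest revenue."""
    return sorted(tasks, key=lambda t: t["revenue"], reverse=True)[:count]
```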
8. The method of claim 3, wherein the target text, the target picture, or the target video has link attribute information; the method further comprises:
receiving, while the electronic device is operated in the task display mode, a second input of a user for the target text, the target picture, or the target video;
and controlling the electronic device to switch from the task display mode to the task processing mode in response to the received second input.
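Purely as an illustrative sketch of claim 8, not part of the patent text (the class, state names, and the dictionary-based `link` attribute are invented), the mode switch triggered by a second input on link-bearing content behaves like a small state machine:

```python
# Sketch: a second input on displayed content switches the device from the
# task display mode back to the task processing mode, but only when the
# content carries link attribute information.
class TaskModeController:
    DISPLAY, PROCESSING = "task_display", "task_processing"

    def __init__(self):
        self.mode = self.DISPLAY

    def on_second_input(self, target):
        """Switch modes only if the selected content carries a link attribute."""
        if self.mode == self.DISPLAY and target.get("link"):
            self.mode = self.PROCESSING
            return target["link"]  # the linked task to open for processing
        return None
```

A second input on plain content leaves the device in the display mode; one on content with link attribute information enters the processing mode and yields the linked task.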
9. The method according to claim 1, wherein the description information further comprises an indication element used for characterizing identity information of a user; the method further comprises:
extracting the indication element and displaying it in an area of the display interface other than the display areas.
10. The method of claim 1, wherein the descriptive information further includes task rating information; the method further comprises the following steps:
and extracting the task evaluation information and displaying the task evaluation information on the display interface.
11. An apparatus for displaying description information of a task, the description information including at least one of a picture, a video, and a text, the task being in one of a pending state, a processing state, and a processing completion state, the apparatus comprising:
a first processing module, configured to process at least one task in a task processing mode in which a first task is in the processing state;
a first receiving module, configured to receive a first input of a user in the task processing mode;
a task execution module, configured to execute the first task in response to the received first input, wherein executing the first task comprises ending the first task so that the first task is in the processing completion state;
a second processing module, configured to control the electronic device to switch between the task processing mode and a task display mode;
an acquisition and display module, configured to acquire a second task in the task display mode and display the description information of the second task through a display interface of the electronic device, wherein the display interface comprises a plurality of display areas, the plurality of display areas are projections of at least one polyhedron on the display interface, the projections comprise at least two adjacent surfaces of the at least one polyhedron, and at least one edge of the polyhedron in the projections serves as a boundary between the plurality of display areas;
the acquisition and presentation module is specifically configured to:
extracting at least part of the description information of the second task, and adapting and displaying the at least part of the description information of the second task and a target display area, wherein the target display area is one or more of the plurality of display areas.
12. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the method for presenting description information of a task according to any one of claims 1-10.
13. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method for presenting descriptive information of a task according to any one of claims 1 to 10.
CN202110313791.3A 2021-03-24 2021-03-24 Method and device for displaying description information of task and electronic equipment Active CN113031846B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110313791.3A CN113031846B (en) 2021-03-24 2021-03-24 Method and device for displaying description information of task and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110313791.3A CN113031846B (en) 2021-03-24 2021-03-24 Method and device for displaying description information of task and electronic equipment

Publications (2)

Publication Number Publication Date
CN113031846A CN113031846A (en) 2021-06-25
CN113031846B true CN113031846B (en) 2022-04-26

Family

ID=76473429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110313791.3A Active CN113031846B (en) 2021-03-24 2021-03-24 Method and device for displaying description information of task and electronic equipment

Country Status (1)

Country Link
CN (1) CN113031846B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115456582A (en) * 2022-09-16 2022-12-09 汉桑(南京)科技股份有限公司 Task management method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110064044A (en) * 2009-12-07 2011-06-15 엘지전자 주식회사 Mobile terminal and method for controlling application of the same
CN102640106A (en) * 2009-06-04 2012-08-15 美尔默公司 Displaying multi-dimensional data using a rotatable object
CN103941954A (en) * 2013-01-17 2014-07-23 腾讯科技(深圳)有限公司 Method and device for displaying interfaces and method and device for user interface interaction
CN112148177A (en) * 2020-09-30 2020-12-29 维沃移动通信有限公司 Background task display method and device, electronic equipment and readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150160824A1 (en) * 2013-11-12 2015-06-11 Cubed, Inc. Systems and method for mobile social network interactions

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102640106A (en) * 2009-06-04 2012-08-15 美尔默公司 Displaying multi-dimensional data using a rotatable object
KR20110064044A (en) * 2009-12-07 2011-06-15 엘지전자 주식회사 Mobile terminal and method for controlling application of the same
CN103941954A (en) * 2013-01-17 2014-07-23 腾讯科技(深圳)有限公司 Method and device for displaying interfaces and method and device for user interface interaction
CN112148177A (en) * 2020-09-30 2020-12-29 维沃移动通信有限公司 Background task display method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN113031846A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
US10068134B2 (en) Identification of objects in a scene using gaze tracking techniques
KR20210046591A (en) Augmented reality data presentation method, device, electronic device and storage medium
US20220319139A1 (en) Multi-endpoint mixed-reality meetings
US20150097767A1 (en) System for virtual experience book and method thereof
US11561675B2 (en) Method and apparatus for visualization of public welfare activities
WO2018000608A1 (en) Method for sharing panoramic image in virtual reality system, and electronic device
CN109905592A (en) According to the interactive controlling of user or the providing method and device of the content of synthesis
CN113301506B (en) Information sharing method, device, electronic equipment and medium
EP4191513A1 (en) Image processing method and apparatus, device and storage medium
CN111651057A (en) Data display method and device, electronic equipment and storage medium
WO2018000620A1 (en) Method and apparatus for data presentation, virtual reality device, and play controller
CN113031846B (en) Method and device for displaying description information of task and electronic equipment
CN111651058A (en) Historical scene control display method and device, electronic equipment and storage medium
CN111770384A (en) Video switching method and device, electronic equipment and storage medium
CN114697703A (en) Video data generation method and device, electronic equipment and storage medium
CN111464859B (en) Method and device for online video display, computer equipment and storage medium
CN113318428A (en) Game display control method, non-volatile storage medium, and electronic device
KR102445530B1 (en) Method and apparatus for visualization of public welfare activities
CN111599292A (en) Historical scene presenting method and device, electronic equipment and storage medium
TWI514319B (en) Methods and systems for editing data using virtual objects, and related computer program products
CN115599206A (en) Display control method, display control device, head-mounted display equipment and medium
CN111625103A (en) Sculpture display method and device, electronic equipment and storage medium
CN114067084A (en) Image display method and device
CN217612860U (en) Immersive virtual display system based on LED display screen
CN112966143A (en) Task additional information collection method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.