CN116701685A - Man-machine interaction method and device and electronic equipment - Google Patents


Info

Publication number
CN116701685A
CN116701685A (application CN202210193746.3A)
Authority
CN
China
Prior art keywords
album
photo
target
video
displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210193746.3A
Other languages
Chinese (zh)
Inventor
韩笑
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority application: CN202210193746.3A
Publication: CN116701685A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/54 - Browsing; Visualisation therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 - Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/74 - Browsing; Visualisation therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 - Interaction with lists of selectable items, e.g. menus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of this application provides a human-computer interaction method, a human-computer interaction apparatus, and an electronic device. At least one photo is associated with a video; the photo is generated from the video and is contained in a first album in a gallery application. In the method, a move control is displayed in response to a user operation on the thumbnail of a target photo displayed in the first album or in a photo tab, where, when the thumbnail is displayed in the photo tab, the photos of the first album are contained in the photo tab and the target photo is one of the at least one photo. In response to the user operating the move control, identifiers of the movable albums are displayed. In response to the user operating the identifier of a target album among those identifiers, the target photo is moved to the target album, and first prompt information is displayed to indicate that the target photo has been moved to the target album. Thus, in a scenario in which associated photos are generated from a video, the photos can be moved, and the user can perceive the move.

Description

Man-machine interaction method and device and electronic equipment
Technical Field
Embodiments of this application relate to human-computer interaction technology, and in particular to a human-computer interaction method, a human-computer interaction apparatus, and an electronic device.
Background
Users can record their lives by taking photos or recording videos. Compared with photos, a video can capture more highlight moments, which are often difficult to catch in a single photo. Thus, when a user records life with video, rich content can be preserved in the video.
In some shooting scenarios, users expect photos to be extracted from the captured original video; however, the ways of further processing the original video and the extracted photos still need improvement.
Disclosure of Invention
Embodiments of this application provide a human-computer interaction method, a human-computer interaction apparatus, and an electronic device, so that, in a scenario in which associated photos are generated from a video, those photos can be moved.
In a first aspect, an embodiment of this application provides a human-computer interaction method. The method may be performed by a terminal or by a chip in the terminal. In a scenario in which associated photos are generated from an original video, the method enables moving and copying the video and/or the photos. Specifically, a video in a gallery application is associated with at least one photo, the at least one photo is generated from the video and is contained in a first album in the gallery application, and the method includes: displaying a move control in response to a user operation on a thumbnail of a target photo displayed in the first album or in a photo tab, where, when the thumbnail of the target photo is displayed in the photo tab, the photos of the first album are contained in the photo tab and the target photo is one of the at least one photo; displaying identifiers of movable albums in response to the user operating the move control; moving the target photo to a target album in response to the user operating the identifier of the target album among the identifiers of the movable albums; and displaying first prompt information, where the first prompt information indicates that the target photo has been moved to the target album.
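The four-step flow of the first aspect (operate the thumbnail, show the move control, show the movable-album identifiers, then move the photo and prompt) can be sketched as plain event handlers. This is an illustrative sketch only, not the patent's implementation; the class, method, and album names are all assumptions.

```python
class GalleryUI:
    """Minimal model of the described move interaction (names assumed)."""

    def __init__(self, albums, first_album):
        self.albums = albums            # album name -> list of photo ids
        self.first_album = first_album  # album currently holding the photo
        self.prompt = None              # the "first prompt information"

    def on_thumbnail_operated(self, photo):
        """Step 1: operating the thumbnail surfaces an operation area
        that contains the move control."""
        return ["move"]

    def on_move_control(self):
        """Step 2: operating the move control lists movable albums."""
        return [name for name in self.albums if name != self.first_album]

    def on_target_album_chosen(self, photo, target):
        """Steps 3-4: move the photo, then display the first prompt."""
        self.albums[self.first_album].remove(photo)
        self.albums[target].append(photo)
        self.prompt = f"Moved to {target}"

ui = GalleryUI({"Camera": ["IMG_1"], "Trips": []}, "Camera")
assert "move" in ui.on_thumbnail_operated("IMG_1")
assert ui.on_move_control() == ["Trips"]
ui.on_target_album_chosen("IMG_1", "Trips")
assert ui.albums == {"Camera": [], "Trips": ["IMG_1"]}
assert ui.prompt == "Moved to Trips"
```

The sketch deliberately separates the three user operations into three handlers, matching the three responsive steps of the claim.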
In one possible implementation, the operation on the thumbnail of the target photo triggers the terminal to display the target photo. Displaying the move control includes: displaying a first interface on which the target photo and a first operation area are displayed, the first operation area containing the move control.
In one possible implementation, the operation on the thumbnail of the target photo selects the target photo, and displaying the move control includes: displaying a second operation area that contains the move control.
In one possible implementation, the second operation area is different from the first operation area.
In one possible implementation, displaying the first prompt information includes: when the user operates the thumbnail of the target photo displayed in the first album, displaying the first prompt information according to the type of the first album; or, when the user operates the thumbnail of the target photo displayed in the photo tab, displaying the first prompt information according to whether gallery data of the target album is contained in the photo tab.
In one possible implementation, displaying the first prompt information according to the type of the first album includes: when the first album is a physical album, displaying the next content arranged after the target photo in the first album, together with the first prompt information; and when the first album is a virtual album, displaying the target photo together with the first prompt information.
In one possible implementation, displaying the first prompt information according to whether gallery data of the target album is contained in the photo tab includes: when the gallery data of the target album is contained in the photo tab, displaying the target photo together with the first prompt information; and when it is not, displaying the next content arranged after the target photo in the photo tab, together with the first prompt information.
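The branching above, which decides what content accompanies the first prompt after a move, can be captured in one small helper. This is a hypothetical sketch of the described rules; the function name, parameter names, and return strings are assumptions.

```python
def view_after_move(source_is_physical, moved_from_photo_tab=False,
                    target_in_photo_tab=False):
    """Return which content is shown alongside the first prompt after a move.

    Rules mirrored from the text:
    - moved from the photo tab: keep showing the photo if the target album's
      gallery data still feeds the photo tab, otherwise show the next content;
    - moved from a physical first album: show the next content;
    - moved from a virtual first album: keep showing the target photo.
    """
    if moved_from_photo_tab:
        return "target_photo" if target_in_photo_tab else "next_content"
    return "next_content" if source_is_physical else "target_photo"

assert view_after_move(source_is_physical=True) == "next_content"
assert view_after_move(source_is_physical=False) == "target_photo"
```

The intuition behind the rules: if the moved photo is still visible in the current view (a virtual album, or a photo tab that aggregates the target album), the view can stay on it; otherwise the view must advance to the next item.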
In one possible implementation, the first album is a physical album or a virtual album; when the first album is a physical album, the movable albums exclude the first album and include the other physical albums, and when the first album is a virtual album, the movable albums include all physical albums; or,
when the user operates the thumbnail of the target photo displayed in the photo tab, the movable albums include all physical albums.
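The rules above for which albums are offered as move targets amount to a short filter. Illustrative only; the parameter names are assumptions, not the patent's terminology.

```python
def movable_albums(physical_albums, first_album, first_is_physical,
                   from_photo_tab=False):
    """Candidate target albums for the move, per the described rules."""
    if from_photo_tab or not first_is_physical:
        # photo tab, or a virtual first album: every physical album is offered
        return list(physical_albums)
    # physical first album: every physical album except the source itself
    return [a for a in physical_albums if a != first_album]

assert movable_albums(["Camera", "Trips"], "Camera", first_is_physical=True) == ["Trips"]
assert movable_albums(["Camera", "Trips"], "Favorites", first_is_physical=False) == ["Camera", "Trips"]
```

Excluding the source physical album prevents offering a no-op move; a virtual album is only a view, so the photo's actual physical album can always change.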
In one possible implementation, when a thumbnail of the video associated with the at least one photo is displayed in the first album and the photo tab, the method further includes: in response to the user operating the thumbnail of the video displayed in the first album or the photo tab, displaying a first interface whose display area shows a picture of the video, the first interface further including an association area and a first operation area, where the association area contains an identifier of the video and identifiers of the at least one photo associated with the video (the at least one photo being generated from the video), and the first operation area contains a move control; displaying the target photo in the display area in response to the user operating the identifier of the target photo on the first interface; displaying identifiers of movable albums in response to the user operating the move control; moving the target photo to the target album in response to the user operating the identifier of the target album; and displaying, on the first interface on which the target photo is displayed, first prompt information indicating that the target photo has been moved to the target album.
In one possible implementation, the target album is a first target album and the first operation area further includes a copy control, and the method further includes: when the display area shows the picture of the video, displaying identifiers of copyable albums in response to the user operating the copy control; copying the video to a second target album in response to the user operating the identifier of the second target album among the copyable albums; and displaying second prompt information indicating that the video has been copied to the second target album.
In one possible implementation, the method further includes: displaying the first interface in response to an operation by which the user triggers the terminal to display the video in the second target album.
In one possible implementation, the target album is a first target album and the first operation area further includes a copy control, and the method further includes: when the display area shows the target photo, displaying identifiers of copyable albums in response to the user operating the copy control; copying the target photo to a second target album in response to the user operating the identifier of the second target album among the copyable albums; and displaying second prompt information indicating that the target photo has been copied to the second target album.
In one possible implementation, the method further includes: displaying the target photo in response to an operation by which the user triggers the terminal to display the target photo in the second target album, where the interface of the second target album on which the target photo is displayed does not include the association area.
In one possible implementation, the first album is a physical album or a virtual album; when the first album is a physical album, the copyable albums exclude the first album and include the other physical albums, and when the first album is a virtual album, the copyable albums include all physical albums; or,
when the user operates the thumbnail of the video displayed in the photo tab, the copyable albums include all physical albums.
In one possible implementation, the first operation area includes a "more" control, and before responding to the user operating the move control, the method further includes: displaying at least one operable control in response to the user operating the "more" control, the at least one operable control including the move control and the copy control.
In a second aspect, an embodiment of this application provides a human-computer interaction apparatus, which may be the terminal of the first aspect or a chip in the terminal. The apparatus includes:
a display module, configured to display a move control in response to a user operation on a thumbnail of a target photo displayed in the first album or the photo tab, where, when the thumbnail of the target photo is displayed in the photo tab, the photos of the first album are contained in the photo tab and the target photo is one of the at least one photo; and to display identifiers of movable albums in response to the user operating the move control; and
a processing module, configured to move the target photo to the target album in response to the user operating the identifier of the target album among the identifiers of the movable albums.
The display module is further configured to display first prompt information indicating that the target photo has been moved to the target album.
In one possible implementation, the operation on the thumbnail of the target photo triggers the terminal to display the target photo. The display module is specifically configured to display a first interface on which the target photo and a first operation area are displayed, the first operation area containing the move control.
In one possible implementation, the operation on the thumbnail of the target photo selects the target photo. The display module is specifically configured to display a second operation area containing the move control.
In one possible implementation, the second operation area is different from the first operation area.
In one possible implementation, the display module is specifically configured to:
when the user operates the thumbnail of the target photo displayed in the first album, display the first prompt information according to the type of the first album; or,
when the user operates the thumbnail of the target photo displayed in the photo tab, display the first prompt information according to whether gallery data of the target album is contained in the photo tab.
In one possible implementation, when the first album is a physical album, the display module is specifically configured to display the next content arranged after the target photo in the first album, together with the first prompt information; when the first album is a virtual album, the display module is specifically configured to display the target photo together with the first prompt information.
In one possible implementation, when the gallery data of the target album is contained in the photo tab, the display module is specifically configured to display the target photo together with the first prompt information; when it is not, the display module is specifically configured to display the next content arranged after the target photo in the photo tab, together with the first prompt information.
In one possible implementation, the first album is a physical album or a virtual album; when the first album is a physical album, the movable albums exclude the first album and include the other physical albums, and when the first album is a virtual album, the movable albums include all physical albums; or,
when the user operates the thumbnail of the target photo displayed in the photo tab, the movable albums include all physical albums.
In one possible implementation, when a thumbnail of the video associated with the at least one photo is displayed in the first album and the photo tab, the display module is further configured to display a first interface in response to the user operating the thumbnail of the video displayed in the first album or the photo tab, where a display area of the first interface shows a picture of the video, and the first interface further includes an association area and a first operation area, the association area containing an identifier of the video and identifiers of the at least one photo associated with the video (the at least one photo being generated from the video), and the first operation area containing a move control; to display the target photo in the display area in response to the user operating the identifier of the target photo on the first interface; and to display identifiers of movable albums in response to the user operating the move control.
The processing module is further configured to move the target photo to the target album in response to the user operating the identifier of the target album.
The display module is further configured to display, on the first interface on which the target photo is displayed, first prompt information indicating that the target photo has been moved to the target album.
In one possible implementation, the target album is a first target album and the first operation area further includes a copy control; the display module is further configured to display identifiers of copyable albums in response to the user operating the copy control when the display area shows the picture of the video.
The processing module is further configured to copy the video to a second target album in response to the user operating the identifier of the second target album among the identifiers of the copyable albums.
The display module is further configured to display second prompt information indicating that the video has been copied to the second target album.
In one possible implementation, the display module is further configured to display the first interface in response to an operation by which the user triggers the terminal to display the video in the second target album.
In one possible implementation, the target album is a first target album and the first operation area further includes a copy control; when the display area shows the target photo and the user operates the copy control, the display module is specifically configured to display identifiers of copyable albums.
The processing module is specifically configured to copy the target photo to a second target album in response to the user operating the identifier of the second target album among the identifiers of the copyable albums.
The display module is further configured to display second prompt information indicating that the target photo has been copied to the second target album.
In one possible implementation, the display module is further configured to display the target photo in response to an operation by which the user triggers the terminal to display the target photo in the second target album, where the interface of the second target album on which the target photo is displayed does not include the association area.
In one possible implementation, the first album is a physical album or a virtual album; when the first album is a physical album, the copyable albums exclude the first album and include the other physical albums, and when the first album is a virtual album, the copyable albums include all physical albums; or,
when the user operates the thumbnail of the video displayed in the photo tab, the copyable albums include all physical albums.
In one possible implementation, the first operation area includes a "more" control, and the display module is further configured to display at least one operable control in response to the user operating the "more" control, the at least one operable control including the move control and the copy control.
In a third aspect, an embodiment of the present application provides an electronic device, which may include: a processor and a memory. The memory is for storing computer executable program code, the program code comprising instructions; the instructions, when executed by a processor, cause the electronic device to perform the method as in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect described above.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the method of the first aspect described above.
For the advantages of the foregoing possible implementations of the second to fifth aspects, refer to the advantages of the first aspect and of the following embodiments; they are not repeated here.
Drawings
FIG. 1 is a schematic diagram of an interface of the "one record, multiple gains" feature according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an interface for shooting with the "one record, multiple gains" feature according to an embodiment of the present application;
FIG. 3A is a schematic diagram of an interface of a terminal according to an embodiment of the present application;
FIG. 3B is a schematic diagram of another interface of a terminal according to an embodiment of the present application;
FIG. 4A is a schematic diagram of an interface for moving gallery data in a physical album in the prior art;
FIG. 4B is a schematic diagram of another interface for moving gallery data in a physical album in the prior art;
FIG. 4C is a flow chart of moving gallery data in the prior art;
FIG. 5A is a schematic diagram of an interface for moving gallery data in a virtual album in the prior art;
FIG. 5B is a schematic diagram of another interface for moving gallery data in a virtual album in the prior art;
FIG. 5C is a schematic diagram of an interface for moving gallery data in the photo tab in the prior art;
FIG. 6A is a schematic diagram of an interface for moving gallery data in a physical album according to an embodiment of the present application;
FIG. 6B is a schematic diagram of another interface for moving gallery data in a physical album according to an embodiment of the present application;
FIG. 6C is a flow chart of moving gallery data according to an embodiment of the present application;
FIG. 7A is a schematic diagram of an interface for moving gallery data in a virtual album according to an embodiment of the present application;
FIG. 7B is a schematic diagram of another interface for moving gallery data in a virtual album according to an embodiment of the present application;
FIG. 8A is a schematic diagram of an interface for moving gallery data in the photo tab according to an embodiment of the present application;
FIG. 8B is a schematic diagram of another interface for moving gallery data in the photo tab according to an embodiment of the present application;
FIG. 9A is a schematic diagram of an interface for moving an AI photo according to an embodiment of the present application;
FIG. 9B is a schematic diagram of an interface for copying a source video according to an embodiment of the present application;
FIG. 9C is a schematic diagram of another interface for copying an AI photo according to an embodiment of the present application;
FIG. 10 is a schematic flow chart of a human-computer interaction method according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a human-computer interaction apparatus according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The human-computer interaction method disclosed in the embodiments of this application applies to terminals, including but not limited to mobile phones, tablet computers, desktop computers, laptop computers, notebook computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, personal digital assistants (PDA), wearable electronic devices, smart watches, and other electronic devices with a camera. The embodiments of this application do not limit the form of the terminal.
Before the embodiments of this application are described, some terms and concepts involved are explained first. It should be understood that the following terminology is not limiting: the terms may be given other names, and a renamed term still satisfies the explanation given below.
Definitions of terms used in this application:
original video: and shooting the obtained video by the user through the terminal.
A highlight moment (MM) refers to a highlight instant during video recording (or shooting). For example, an MM may be the best moment of a motion, the best facial expression, or the best hitting action. It should be understood that this application does not limit the term MM; an MM may also be called a wonderful moment, a decisive moment, a highlight key frame, or a best shot (BS), among others. In different scenes, the highlight moment may be a different kind of picture instant. For example, when recording a video of a football match, the highlight moment may be the moment the player's foot touches the football, or the moment the football flies into the goal. When recording a video of a person jumping up from the ground, the highlight moment may be the moment the person is at the highest point in the air, or the moment the person's motion in the air is most expressive.
MM tag: a kind of time tag used to indicate the position of a highlight moment in the recorded video. For example, a video may correspond to one or more MM tags, which may indicate that at the 10th second, at 1 minute 20 seconds, and so on, the corresponding image frame (video frame) of the video is a highlight moment.
Artificial intelligence (AI) photo: a video frame corresponding to a highlight moment in the original video. In an embodiment, an AI photo may also be called a highlight-moment photo. For example, an AI photo may be a video frame of the original video in which the user is laughing, in which the user jumps highest, or in which the user looks back.
In an embodiment, AI photos may be captured and saved while the user records video with the terminal.
"One record, multiple gains" function: when the user shoots a video with the camera application in the terminal, pressing the shooting control once yields the captured original video, one or more highlight-moment photos, and an AI profile.
The AI profile is obtained based on the original video. In an embodiment, the AI profile records information such as the scenes of the video shot by the user and the highlight key frames. For example, the AI profile may include identifiers of at least one scene in the original video and identifiers of key frames (e.g., identifiers of AI photos). Illustratively, the original video includes scene 1 "ocean" and scene 2 "casino", and the profile records identifier 1 of the key frame in scene 1 "ocean" and identifier 2 of the key frame in scene 2 "casino". For another example, the information recorded in the AI profile may specifically be MM tags, a video overall scene tag, segment scene tags, camera-movement (mirror) tags, tags of high-quality video frames, and the like. In an embodiment, a tag may be understood as an identifier.
The video overall scene tag may also be called a video overall summary tag; it mainly indicates the shooting scene of the video as a whole, and its name is not limited here. For example, the video overall scene tag may be a food tag, a night-scene tag, or the like.
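Based on the description above, an AI profile might be modeled as the following record. Every field name here is an assumption for illustration; the patent specifies the recorded information but not an on-disk format.

```python
# Illustrative shape of an AI profile (all field names are assumptions).
ai_profile = {
    "scenes": [
        {"label": "ocean",  "keyframe_ids": ["id_1"]},   # scene 1 and its key frame
        {"label": "casino", "keyframe_ids": ["id_2"]},   # scene 2 and its key frame
    ],
    "overall_scene_tag": "travel",          # whole-video summary tag
    "segment_scene_tags": ["ocean", "casino"],
    "mm_tags_ms": [10_000, 80_000],         # highlight moments at 10 s and 1 min 20 s
    "good_quality_frame_ids": ["id_1"],     # tags of high-quality video frames
}

assert len(ai_profile["scenes"]) == 2
assert ai_profile["scenes"][1]["label"] == "casino"
```

Storing only this lightweight record, rather than a rendered AI video, is what allows the later "generate on demand" behavior described below in the text.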
The "one record, multiple gains" feature may be implemented as follows: during video recording, highlight moments are automatically identified and a snapshot is triggered to obtain the MM picture; after recording ends, when the user views the recorded video, the MM pictures of the highlight moments and a highlight short video (also called a wonderful short video, a highlight video, or an AI video) can be recommended to the user. It should be understood that the duration of the highlight short video obtained in this way is shorter than the duration of the entire complete video. For example, recording a complete 1-minute video may yield 4 highlight-moment photos and a 15-second highlight video. The feature may also have other names, such as one-key multi-shot, one-key film, one-key blockbuster, AI one-key blockbuster, and so on.
AI video: generated based on the AI configuration file. For example, the AI video may be composed of: a 5s clip spanning before and after the key frame of identification 1, and a 5s clip spanning before and after the key frame of identification 2, i.e., a 10s AI video. Alternatively, the AI video may be, for example: a video containing 5s in scene 1 "ocean" and 5s in scene 2 "casino".
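The clip-assembly step just described can be sketched as follows. This is a minimal illustration under stated assumptions: the 5s clip length is taken from the example above, and the function name, parameters, and overlap-merging behavior are hypothetical.

```python
# Sketch of assembling the AI video from the AI configuration file:
# take a short clip spanning before and after each key frame, clamp it
# to the video bounds, and merge overlapping clips. The 5s clip length
# and all names are assumptions drawn from the example in the text.

def ai_video_segments(key_frame_times, clip_len=5.0, video_len=60.0):
    half = clip_len / 2
    segments = []
    for t in sorted(key_frame_times):
        start, end = max(0.0, t - half), min(video_len, t + half)
        if segments and start <= segments[-1][1]:
            # overlapping clips are merged into a single segment
            segments[-1] = (segments[-1][0], end)
        else:
            segments.append((start, end))
    return segments

# Key frames at 12s and 40s of a 60s video -> two 5s clips, 10s in total.
segs = ai_video_segments([12.0, 40.0])
total = sum(end - start for start, end in segs)
```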
In one embodiment, after video capture is completed, the terminal may save the AI configuration file without immediately generating the AI video; storage space is thereby saved when the user does not browse the AI video. In one embodiment, the user may trigger the terminal to generate the AI video on the playing interface of the original video. For example, the user may operate the one-key-blockbuster control 351 displayed at c in fig. 3B to trigger the terminal to generate the AI video, as described with respect to fig. 3B.
The process of generating the AI photos and the AI configuration file from the original video, and of generating the AI video from the AI configuration file, is not described in detail here.
In the embodiments of the present application, the user's operation modes on objects such as controls and icons in a graphical user interface (GUI) include, but are not limited to: touching a specific control on the mobile phone screen, pressing a specific physical key or key combination, inputting voice, and air gestures. In the following embodiments, a user operation on a control in the GUI is taken as an example for description.
For example, referring to fig. 1, taking a terminal as an example of a mobile phone, in an embodiment of the present application, a process of recording video using a "multiple recording" mode (or function) includes:
As shown in fig. 1 a, the screen of the mobile phone displays a main interface, which displays application icons of various applications (apps). The user can trigger the mobile phone to open a camera application (hereinafter referred to as a camera) by clicking on the "camera" application icon 101 on the mobile phone desktop, and the mobile phone displays a shooting interface as shown by b in fig. 1.
As shown in b in fig. 1, the shooting interface of the camera generally includes a viewfinder 102, a shooting control, a video recording control, a setting control 103, and other controls (such as a portrait function control, a night scene function control, or other controls). The user can start the recording mode by clicking the recording control, and the mobile phone can display a recording interface as shown in c in fig. 1. The user can enter the camera's setup interface by clicking on the "setup" control 103, and the handset displays an interface as shown by d in fig. 1.
An option 104 for turning on the "one-record-multiple function" is displayed in the interface shown at d in fig. 1. That is, when the user turns on one-record-multiple, the mobile phone obtains the original video, the AI photos, and the AI configuration file when recording a video. Of course, the user may also turn off the one-record-multiple function through the option 104.
In one embodiment, the setting interface shown at d in fig. 1 may also include a minimum time limit control. The minimum time setting control limits the minimum recording duration for which the one-record-multiple function can be triggered: if the recording duration of a video is less than the minimum recording duration, the one-record-multiple feature is not triggered for that video. For example, the minimum time limit may be set to 15s; when the user records for less than 15s, no highlight photos are generated.
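The minimum-duration gate described above can be expressed as a small check. This is a hypothetical sketch: the 15s default, the function name, and the feature flag are assumptions for illustration only.

```python
# Minimal sketch of the minimum-duration gate for one-record-multiple.
# The 15s default and all names are illustrative assumptions.

MIN_RECORDING_SECONDS = 15

def one_record_multiple_triggered(recorded_seconds, feature_on=True,
                                  min_seconds=MIN_RECORDING_SECONDS):
    """Highlight photos are produced only when the one-record-multiple
    switch is on and the recording reaches the minimum duration."""
    return feature_on and recorded_seconds >= min_seconds
```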
It will be appreciated that the setting interface shown by d in fig. 1 may also include other controls regarding video settings, such as a video resolution setting control, a video frame rate setting control, etc., and that the controls shown by d in fig. 1 are merely exemplary descriptions.
After the one-recording-multiple-function is started, the user clicks the return control 105 shown in d in fig. 1, and the mobile phone can return to the interface shown in c in fig. 1 again, and then clicks the record control 106 to record a video. In other embodiments, the one-record-multiple option is default on, and the user may not need to click on the setting control 103 shown in c in fig. 1, but directly click on the recording control 106 shown in c in fig. 1, so that recording can be performed in the one-record-multiple mode.
Fig. 2 is a schematic diagram of an interface for shooting with the one-record-multiple function according to an embodiment of the present application. a in fig. 2 shows a picture (e.g., the picture at 11 seconds) during recording by the terminal using the one-record-multiple function. The interface shown at a in fig. 2 includes a recording stop control 107, a recording pause control 108, and a photographing key 109. During recording, the user may click the photographing key 109 to manually capture a photo. After the user clicks the recording stop control 107 at a in fig. 2, the recording process ends; at this time, the mobile phone displays the interface shown at b in fig. 2, which is a preview interface after recording ends. A bubble window 110 may pop up in the interface, and the content displayed in the window 110 may be: "One-record-multiple has generated highlight photos", to prompt the user to view the AI photos.
The preview 111 in b in fig. 2 is a thumbnail of the recorded original video. If the user clicks on the preview 111, he can enter a gallery application (or referred to as a gallery, gallery application program) to view the recorded original video, and AI photos, etc., and the playing, displaying of the original video, and displaying of the AI photos can be described with reference to the following embodiments.
It should be understood that the original video shot by the terminal may be stored in the gallery application of the terminal, and the AI photos and AI videos may also be stored in the gallery application. In one embodiment, the videos and photos stored in the gallery application may be referred to as gallery data. In one embodiment, gallery data may be displayed in the form of an album, in the form of a photo tab (tab), or the like. For example, a in fig. 3A is an interface for displaying gallery data in the form of an album in a gallery application provided in an embodiment of the present application, where the interface may include an album control 31, a photo control 32, and at least one album 33.
The gallery application shown in fig. 3A includes at least one album, such as a camera album, an all-photos album, a video album, a screen-capture album, a favorites album, and other custom-added albums. In one embodiment, albums in the gallery may be divided into physical albums and virtual albums. The camera album, the screen-capture album, and other custom-added albums are physical albums, while the all-photos album and the video album are virtual albums. It should be understood that a physical album can be understood as: a path that actually exists for storing gallery data. A virtual album can be understood as: an album that groups videos and photos according to their characteristics. For example, compared with a photo, a video contains a plurality of video frames and has dynamic characteristics, so videos taken by the terminal can be aggregated in a "video" album, which is a virtual album.
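The distinction between physical and virtual albums, and the rule (used in the move scenarios below) that only physical albums other than the current one are offered as move targets, can be modeled as follows. The `Album` class and the `movable_albums` rule are illustrative assumptions, not a real gallery API.

```python
# Illustrative model of physical vs. virtual albums. All names and the
# movable-album rule are assumptions sketching the described behavior.

class Album:
    def __init__(self, name, is_virtual=False):
        self.name = name
        self.is_virtual = is_virtual  # virtual: groups items by feature;
                                      # physical: a real storage path

def movable_albums(albums, current):
    """Move targets offered to the user: physical albums only, excluding
    the album from which the item was opened."""
    return [a.name for a in albums if not a.is_virtual and a is not current]

camera = Album("camera")
screenshots = Album("screen capture")
liked = Album("like")
videos = Album("video", is_virtual=True)          # aggregates all videos
all_photos = Album("all photos", is_virtual=True)  # aggregates all photos

targets = movable_albums([camera, screenshots, liked, videos, all_photos],
                         current=camera)
```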
In one embodiment, AI photos generated from an original video may be stored separately in a physical album, such as an "AI photo" album, as shown at a in fig. 3A. The name of the album storing AI photos is not limited in the embodiments of the present application; it may also be called a "one-record-multiple" album, for example.
Referring to a in fig. 3A, the user clicks on the photo control 32 and the terminal may display an interface that displays gallery data in the form of a photo tab. Referring to b in fig. 3A, an album control 31, a photo control 32, and photos displayed in chronological order may be included in the interface for displaying gallery data in the form of photo tabs. In one embodiment, the photos and videos in the camera album, as well as the photos in the screenshot album, may be displayed in the interface of the photo tab.
In one embodiment, to distinguish an original video from a normal video (no AI photos generated) for which AI photos have been generated, when a cover of the original video is displayed in a camera album, video album, or other album that is custom added by a user, an identifier may be added to the cover of the original video, the identifier characterizing that the original video has generated AI photos. By way of example, a in fig. 3B is the same as a in fig. 3A, when the user clicks on the camera album 33, the terminal may display a thumbnail of the photo included in the camera album, and a cover of the video, such as the cover of video 1, may display an identification "AI"34 to characterize video 1 as the original video for which the AI photo has been generated, as shown by B in fig. 3B.
In one embodiment, when no highlight moment is included in the original video, i.e., no AI photo is generated, the identification "AI" 34 may still be displayed on the cover of video 1 if the original video has generated an AI configuration file. It should be understood that the embodiments of the present application relate to the scenario in which the original video has generated AI photos.
Because the AI photos generated from the original video are associated with the original video (equivalently, the original video is associated with the AI photos generated from it), in the embodiments of the present application, when a user views the original video, the AI photos associated with it can be viewed on the playing interface of the original video.
In one embodiment, AI photos may be displayed in association on the playing interface of the original video in the camera album, but not on the playing interface of the original video in virtual albums such as the all-photos album and the video album, nor in my favorites, the recycle bin, smart albums, map albums, search results, hidden albums, or shared albums.
In one embodiment, AI photos may be displayed in association on the playing interface of the original video in the camera album, the all-photos album, the video album, my favorites, smart albums, map albums, and search results, but not on the playing interface of the original video in the recycle bin, hidden albums, or shared albums.
In one embodiment, albums with associated displays and albums without associated displays may be preset.
In one embodiment, the terminal may play the original video when the user clicks on the cover of the original video. Illustratively, referring to B in fig. 3B, the user clicks on the cover (thumbnail, which may be the picture of the first frame of video) of video 1, and the terminal may play video 1. In one embodiment, as shown at c in fig. 3B, the interface for playing video 1 may include: a play area 35, an association area 36, and an operation area 37.
The play area 35 may display the picture of video 1. In one embodiment, the terminal may play video 1 directly in the play area 35 in response to the user clicking the cover of the original video (or the terminal may display video 1 together with a pause control, and the user's operation on the pause control may trigger the terminal to play video 1). In one embodiment, the user clicking the cover of video 1 may be referred to as a first operation. In one embodiment, the interface shown at c in fig. 3B may be referred to as a first interface. It should be understood that, in the embodiments of the present application, at c in fig. 3B, video 1 has already been playing for a period of time.
In one embodiment, an icon 351, such as a "one-key-blockbuster" identifier, may be displayed on the play area 35; the user clicking the "one-key-blockbuster" identifier triggers the terminal to generate the AI video according to the AI configuration file. The one-key-blockbuster function is not involved in the embodiments described below, and therefore the "one-key-blockbuster" identifier is not shown in the following figures.
In one embodiment, a playing progress bar 352 of video 1 may also be displayed on the play area 35, so that the user may adjust the playing progress of video 1, and so on. It should be understood that the following embodiments do not relate to adjusting the playing progress of video 1, and thus the playing progress bar 352 of video 1 is not shown.
The association area 36 may include: a thumbnail of the cover of the original video (video 1), and thumbnails of the AI photos associated with the original video. At c in fig. 3B, video 1 is represented by text, and the AI photos associated with video 1 may include, for example, AI photo 1, AI photo 2, and AI photo 3. It should be understood that, in one embodiment, a thumbnail may be understood as a scaled-down picture of the original. The user clicking different thumbnails may trigger the terminal to switch the content displayed in the play area 35. It should be understood that AI photo 3 is not fully displayed at c in fig. 3B due to the page size. In one embodiment, the thumbnail of the cover of video 1 displayed in the association area 36 may be referred to as an identification of video 1, and the thumbnails of the AI photos associated with the original video may be referred to as identifications of the photos. The embodiments of the present application take thumbnails as an example to represent the identification of video 1 and the identifications of the photos, but text or other forms may also be used; the embodiments of the present application are not limited thereto.
Illustratively, when the play area 35 at c in fig. 3B is playing video 1 and the user clicks AI photo 1, the terminal may display AI photo 1 in the play area 35, as shown at d in fig. 3B. When the user clicks the thumbnail of the cover of video 1 again, the terminal may continue playing video 1 in the play area 35, i.e., continue from the progress video 1 had reached at c in fig. 3B.
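The switching behavior just described (viewing an AI photo pauses the video at its current position; reselecting the video cover resumes from there) can be modeled minimally as follows. The class and method names are illustrative assumptions, not part of the described implementation.

```python
# Minimal model of the play area at c/d in fig. 3B: selecting an AI photo
# thumbnail replaces the video picture, and selecting the video cover
# again resumes playback from where it left off. Names are assumptions.

class PlayArea:
    def __init__(self):
        self.showing = "video 1"
        self.position = 0.0          # playback progress of video 1 (s)

    def play(self, seconds):
        if self.showing == "video 1":
            self.position += seconds  # only the video advances

    def select(self, thumbnail):
        self.showing = thumbnail      # switch the displayed content

area = PlayArea()
area.play(11)               # video 1 has played for a while (c in fig. 3B)
area.select("AI photo 1")   # user taps the AI photo thumbnail (d in fig. 3B)
area.play(3)                # viewing the photo does not advance the video
area.select("video 1")      # tapping the cover resumes at 11s
```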
The first operation area 37 may include, but is not limited to: a share control 371, a delete control 372, and a more control 373. The controls included in the first operation area 37 are not limited in the embodiments of the present application; fig. 3B shows the controls used in the embodiments of the present application as an example.
In the following embodiments, the user operates the more controls 373 to trigger the terminal to execute the corresponding operation. The user may operate the more controls 373 to trigger the terminal to perform operations such as moving and copying the gallery data, and the following embodiments describe a process of the terminal to perform operations such as moving and copying the gallery data.
It should be appreciated that in one embodiment, when video 1 is included in the photo tab, the user may also play video 1 in the photo tab to display the same interface as c in fig. 3B, on which operations may be performed as described in connection with the embodiments below.
Before introducing the process in which the terminal moves, copies, or otherwise operates on gallery data in the embodiments of the present application, the process in which a terminal moves gallery data in the prior art is first described:
1. In the prior art, a terminal moves gallery data in a physical album (a photo is taken as an example for illustration; the process of moving a video is similar to that of moving a photo):
1) Large-image move
A large-image move refers to a move operation performed on the interface on which the terminal displays a single photo.
Taking the physical album being the camera album as an example, referring to a in fig. 4A, the user opens photo 1 in the camera album, and the terminal may display photo 1 and the first operation area 37 on the interface. The user clicks the more control 373 in the first operation area 37, and the terminal may display the first operation menu 41. As shown at b in fig. 4A, the first operation menu 41 may include, but is not limited to: a move control 411, a copy control 412, and the like. It should be understood that, at b in fig. 4A, the controls used in the embodiments of the present application are displayed in the first operation menu 41; the first operation menu 41 may further include other controls such as rename and add note, which are not shown in fig. 4A.
The user clicking the move control 411 may trigger the terminal to display the movable albums, which do not include the camera album in which photo 1 was opened, nor the virtual albums. As shown at c in fig. 4A, the movable albums include: the screen-capture album and user-defined albums, such as an album named "like". The user clicks the screen-capture album, the terminal may move photo 1 from the camera album to the screen-capture album, and correspondingly, the terminal may display the next gallery data item (video or photo) after photo 1, as shown at d in fig. 4A.
In the scenario of a physical-album large-image move, because the terminal displays the content following photo 1 after photo 1 is moved, the user can perceive that the gallery data (e.g., photo 1) has been moved.
2) Multi-selection move
A multi-selection move means that, on an interface where the terminal displays thumbnails of a plurality of gallery data items, a user operation triggers the terminal to move the plurality of gallery data items.
Taking the physical album being the camera album as an example, thumbnails of the gallery data contained in the camera album are shown at a in fig. 4B; the user long-pressing any gallery data item may trigger the terminal to display the second operation area 38, as shown at b in fig. 4B.
In one embodiment, the second operation area 38 may be the same as the first operation area 37. In one embodiment, the second operation area 38 may be different from the first operation area 37; illustratively, the second operation area 38 may include: a share control 371, a select-all control 381, a delete control 372, a more control 373, and the like.
If the user selects 2 files and then clicks the more control 373 in the second operation area 38, the terminal may display the second operation menu 42, as shown at c in fig. 4B; the second operation menu 42 shown at c in fig. 4B is identical to the first operation menu 41 shown at b in fig. 4A. In one embodiment, the controls included in the second operation menu 42 may be the same as or different from those in the first operation menu; illustratively, the second operation menu 42 may include: a move control 411 and a copy control 412.
The user clicking the move control 411 may trigger the terminal to display the movable albums, which do not include the camera album or the virtual albums. As shown at d in fig. 4B, the movable albums include: the screen-capture album, the user-defined "like" album, and the like. Illustratively, the user clicks the screen-capture album, the terminal may move the two files selected by the user to the screen-capture album, and correspondingly, the terminal may return to displaying the gallery data in the camera album, which no longer includes the 2 moved files, as shown at e in fig. 4B. It should be understood that, at e in fig. 4B, the number of gallery data items in the camera album changes from 8 to 6, representing that the gallery data no longer contains the 2 moved files.
In this way, in the scenario of a physical-album multi-selection move, the moved gallery data is no longer displayed in the physical album after the plurality of gallery data items are moved, so the user can perceive that the gallery data has been moved.
In one embodiment, the process of moving gallery data in a physical album in the prior art may be simplified to that shown as a in fig. 4C.
2. In the prior art, a terminal moves gallery data in a virtual album (a photo is taken as an example for illustration; the process of moving a video is similar to that of moving a photo):
1) Large-image move
Illustratively, taking the virtual album being the all-photos album as an example, referring to a in fig. 5A, the user opens photo 1 in the all-photos album, and the terminal may display photo 1 and the first operation area 37 on the interface. The user clicks the more control 373 in the first operation area 37, and the terminal may display the first operation menu 41, as shown at b in fig. 5A. The user clicking the move control 411 may trigger the terminal to display the movable albums, which may include all physical albums in the gallery application; as shown at c in fig. 5A, the movable albums include: the camera album, the screen-capture album, and the user-defined "like" album. The user clicks the camera album, the terminal may move photo 1 from the all-photos album to the camera album, and correspondingly, the terminal may return to displaying photo 1 (because the photos in the all-photos album include the photos in the camera album), as shown at d in fig. 5A.
In the scenario of a virtual-album large-image move, because the terminal still displays the moved photo or video after the photo or video in the virtual album is moved, the user may mistakenly feel that the move failed, resulting in a poor user experience.
2) Multi-selection move
Taking the virtual album being the all-photos album as an example, a in fig. 5B shows thumbnails of the gallery data included in the all-photos album; the user long-pressing any gallery data item may trigger the terminal to display the second operation area 38, as shown at b in fig. 5B. If the user selects 2 files and then clicks the more control 373 in the second operation area 38, the terminal may display the second operation menu 42, as shown at c in fig. 5B. The user clicking the move control 411 may trigger the terminal to display the movable albums, which may refer to the albums shown at c in fig. 5A, as shown at d in fig. 5B. Illustratively, the user clicks the screen-capture album, the terminal may move the two files selected by the user to the screen-capture album, and correspondingly, the terminal may return to displaying the gallery data in the all-photos album, which still includes the 2 moved files, as shown at e in fig. 5B. It should be understood that e in fig. 5B takes as an example that the all-photos album still contains 12 gallery data items, representing that the gallery data includes the 2 moved files.
In the scenario of a virtual-album multi-selection move, because the terminal still displays the moved photos or videos after the photos or videos in the virtual album are moved, the user may mistakenly feel that the move failed, resulting in a poor user experience.
In one embodiment, the process of moving gallery data in a virtual album in the prior art may be simplified as shown at b in fig. 4C.
3. In the prior art, a terminal moves gallery data in the photo tab (a photo is taken as an example for illustration; the process of moving a video is similar to that of moving a photo):
1) Large-image move
The photo tab includes photos in the camera album and photos in the screenshot album.
Taking moving photo 1, a photo in the camera album, as an example, referring to a in fig. 5C, the user opens photo 1 in the photo tab, and the terminal may display photo 1 and the first operation area 37 on the interface. The user clicks the more control 373 in the first operation area 37, and the terminal may display the first operation menu 41, as shown at b in fig. 5C. The user clicking the move control 411 may trigger the terminal to display the movable albums, as shown at c in fig. 5C; it should be understood that the movable albums shown at c in fig. 5C are identical to those shown at d in fig. 5B. Illustratively, if the user clicks the screen-capture album, the terminal may move photo 1 to the screen-capture album, and correspondingly, the terminal may return to displaying the photo tab, in which photo 1 is still displayed, as shown at d in fig. 5C.
Notably, because the photo tab includes the photos in the screen-capture album, if the user's operation moves photo 1 into an album included in the photo tab, the terminal still displays photo 1 when returning to the photo tab, so the user may mistakenly feel that the move failed, resulting in a poor user experience.
In one embodiment, if at c in fig. 5C the user clicks an album not included in the photo tab, such as the "like" album, the terminal may correspondingly return to displaying the content following photo 1 in the photo tab, as shown at e in fig. 5C. In this embodiment, since the photo tab does not include the "like" album, after the user moves photo 1 to the "like" album, the terminal displays the content following photo 1 in the photo tab, and the user can perceive that photo 1 has been moved.
In summary, in the scenario of a photo-tab large-image move, whether the user perceives that the gallery data has been moved (or the user's experience of moving the gallery data) depends on whether the target album is contained in the photo tab. If the target album is not contained in the photo tab, the user can perceive that the gallery data has been moved.
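The rule summarized above can be stated as a one-line check: after a move from the photo tab, the item remains visible in the tab exactly when the target album is one of the albums the tab aggregates. The album names below are assumptions taken from the examples in the text.

```python
# Sketch of the visibility rule for moves performed from the photo tab.
# The photo tab aggregates the camera and screen-capture albums (per the
# text); the set contents and names are illustrative assumptions.

PHOTO_TAB_ALBUMS = {"camera", "screen capture"}

def visible_in_photo_tab_after_move(target_album,
                                    tab_albums=PHOTO_TAB_ALBUMS):
    """True when the moved item still appears in the photo tab, which is
    exactly the case where the user may mistake the move for a failure."""
    return target_album in tab_albums
```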
2) Multi-selection move
The photo-tab multi-selection move scenario has the same problem as the photo-tab large-image move scenario; the prior-art moving process is not illustrated with drawings in the embodiments of the present application.
In one embodiment, the process of moving gallery data in the photo tab in the prior art may be simplified as shown at c in fig. 4C.
In order to improve the user experience of moving gallery data and prevent the user from mistakenly believing that the gallery data has not been moved: in one embodiment, in the scenario of moving gallery data in a physical album, the user can already perceive that the gallery data has been moved, so the physical-album move flow may be left unadjusted. In this embodiment, the process of moving gallery data in a physical album in the embodiments of the present application may be simplified as a1 in fig. 6C; it should be understood that a1 in fig. 6C is the same as a in fig. 4C.
In one embodiment, in the scenario of moving gallery data in a physical album, in order to further improve the user experience, the embodiments of the present application may display prompt information after the gallery data is moved, where the prompt information is used to prompt the user that the gallery data has been moved to the target album. It should be understood that the target album is the album selected by the user, such as the screen-capture album selected by the user in fig. 5C.
1. In the embodiments of the present application, a terminal moves gallery data in a physical album (a photo is taken as an example for illustration; the process of moving a video is similar to that of moving a photo):
For example, in the scenario of a physical-album large-image move, as shown at d in fig. 6A, prompt information, such as "Moved to screen-capture album", may be displayed on the interface where the terminal returns to the content following photo 1. It should be understood that a-c in fig. 6A are the same as a-c in fig. 4A, respectively. It should also be understood that the movement of an AI photo associated with a video may refer to the example of photo 1.
For example, in the scenario of a physical-album multi-selection move, as shown at e in fig. 6B, the terminal returns to displaying the gallery data in the camera album, which no longer includes the 2 files moved by the user, and the terminal may display prompt information on the interface, such as "Moved to screen-capture album". It should be understood that a-d in fig. 6B are the same as a-d in fig. 4B, respectively. In one embodiment, the prompt information indicating that a photo (including an AI photo) has been moved to the target album may be referred to as first prompt information.
In this embodiment, the process of moving gallery data in a physical album in the embodiments of the present application may be simplified as a2 in fig. 6C.
In one embodiment, for the scenario of moving gallery data in a virtual album and the scenario of moving gallery data in the photo tab, the embodiments of the present application may display a prompt after the gallery data is moved, to remind the user that the move is completed and thereby improve the user experience. The following describes in turn the scenario of moving gallery data in a virtual album and the scenario of moving gallery data in the photo tab:
2. In the embodiments of the present application, a terminal moves gallery data in a virtual album (a photo is taken as an example for illustration; the process of moving a video is similar to that of moving a photo):
Because, in the scenario of moving gallery data in a virtual album, the terminal still displays the moved gallery data after the move, in order to prevent the user from mistakenly feeling that the move failed, in the embodiments of the present application, prompt information may be displayed on the interface where the terminal displays the moved gallery data, the prompt information being used to prompt the user that the gallery data has been moved.
For example, in the scenario of a virtual-album large-image move, as shown at d in fig. 7A, the terminal returns to displaying photo 1 and displays prompt information on the interface, such as "Moved to camera album". It should be understood that a-c in fig. 7A are the same as a-c in fig. 5A, respectively.
For example, in the scenario of a virtual-album multi-selection move, as shown at e in fig. 7B, the terminal returns to displaying the gallery data in the all-photos album, which contains the 2 files moved by the user, and the terminal may display prompt information on the interface, such as "Moved to screen-capture album". It should be understood that a-d in fig. 7B are the same as a-d in fig. 5B, respectively.
In one embodiment, the moving process of gallery data in the virtual album in the embodiment of the present application may be simplified as shown in b in fig. 6C.
3. In the embodiments of the present application, a terminal moves gallery data in the photo tab (a photo is taken as an example for illustration; the process of moving a video is similar to that of moving a photo):
Because, in the scenario of moving gallery data in the photo tab, whether the terminal displays the moved gallery data after the move depends on whether the photo tab contains the album to which the moved gallery data now belongs, in order to prevent the user from mistakenly feeling that the move failed when the photo tab contains that album, in the embodiments of the present application, prompt information may be displayed on the interface where the terminal displays the moved gallery data, the prompt information being used to prompt the user that the gallery data has been moved.
In an exemplary scenario of moving a photo tab, when the photo tab includes an album (e.g., a screen capture album) to which the moved gallery data belongs, as shown by d in fig. 8A, the terminal returns to display the photo tab, where photo 1 is displayed, and the terminal may display a prompt message on the interface, such as "moved to the screen capture album" to prompt the user that the movement is completed. It should be appreciated that a-C in FIG. 8A are the same as a-C in FIG. 5C, respectively.
Similarly, as shown by e in fig. 8A, when the user selects an album not included in the photo tab, such as the "like" album, the terminal may return to displaying the photo tab, in which photo 1 is no longer displayed. To enhance the user experience, the terminal may display a prompt message on the interface, such as "moved to 'like' album", to prompt the user that the move is complete.
Similarly, in the scenario of multi-selection movement in the photo tab, when the movement of the gallery data is complete, the terminal may display a prompt message on the displayed interface to prompt the user that the gallery data has been moved, as shown in fig. 8B.
In one embodiment, the moving process of gallery data in the photo tab in the embodiment of the present application may be simplified as shown by c1 and c2 in fig. 6C. When the photo tab contains the album to which the moved gallery data belongs, a prompt message may be displayed; when the photo tab does not contain that album, the prompt message may be omitted.
In the embodiment of the present application, in the scenarios of moving gallery data in an entity album, in a virtual album, and in the photo tab, after the gallery data is moved the terminal may display prompt information on its interface to prompt the user that the gallery data has been moved. In this way, regardless of whether the interface then shows the moved gallery data or the next content, the user knows that the move succeeded, which improves the user experience.
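As a non-limiting illustration, the display behavior across these three moving scenarios can be sketched as a small decision function. All names, strings, and the scene encoding below are invented for illustration; the embodiments do not prescribe any particular implementation.

```python
def display_after_move(scene, target_album=None, tab_albums=()):
    """Decide what the terminal shows once gallery data has been moved.

    scene: "entity_album", "virtual_album", or "photo_tab".
    Returns (content, prompt); the prompt confirms the move to the user.
    """
    prompt = f"Moved to {target_album} album"
    if scene == "entity_album":
        # The item has left the entity album, so the next content is shown.
        return "next content", prompt
    if scene == "virtual_album":
        # A virtual album still references the item after the move.
        return "moved item", prompt
    if scene == "photo_tab":
        # Visibility depends on whether the tab aggregates the target album.
        visible = target_album in set(tab_albums)
        return ("moved item" if visible else "next content"), prompt
    raise ValueError(f"unknown scene: {scene}")
```

Note that every branch returns the prompt, matching the point above that the user is informed of a successful move regardless of what the interface shows next.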
In the embodiment of the present application, in a scenario where an original video generates an AI photo, the original video and the AI photo may also be moved. It should be noted that, regardless of whether the terminal is displaying the AI photo associated with video 1, the terminal may support moving the AI photo, and the moved AI photo may maintain its association relationship with video 1. In addition, when the terminal displays video 1 together with the AI photo associated with video 1, after the AI photo or video 1 is moved, the terminal may continue to display video 1 and the associated AI photo and display a prompt message on the interface indicating that video 1 or the AI photo has been moved.
Illustratively, as shown by c in fig. 3B, when the terminal displays a frame of the original video (video 1), the user clicks the more control 373, which may trigger the terminal to display the first operation menu 41. When the user operates the move control 411, the terminal may move video 1 to the album selected by the user. The process of moving video 1 may follow the moving process in the prior art, or may follow the process of moving gallery data described in 1-3 above in the embodiment of the present application.
a in fig. 9A is the same as d in fig. 3B: when the terminal displays the AI photo associated with video 1, the user clicks the more control 373, which may trigger the terminal to display the first operation menu 41. When the user operates the move control 411, the terminal may display the movable albums, as shown by b-c in fig. 9A. It should be understood that the albums included among the movable albums depend on whether the user opened video 1 from an "entity album" or a "virtual album", as described in the related descriptions in the above embodiments. In the embodiment of the present application, the user opening video 1 from the camera album is taken as an example; accordingly, the camera album is not included in c in fig. 9A.
For example, if the user selects the screen capture album among the movable albums, the terminal may move AI photo 1 to the screen capture album. Accordingly, since AI photo 1 is still stored locally and only its storage location has changed, the terminal may return to displaying the interface shown by a in fig. 9A. To prompt the user that AI photo 1 has been moved successfully, the terminal may display a prompt message, such as "moved to screen capture album", on the interface shown by a in fig. 9A, as shown by d in fig. 9A. The interface shown by a in fig. 9A may be referred to as a first interface.
In the embodiment of the present application, the original video may be moved according to the moving method in the prior art, or, to improve the user experience, according to modes 1-3 in the embodiment of the present application. For the AI photo associated with video 1, prompt information may be displayed after the AI photo is moved to prompt the user that the move succeeded, which improves the user experience; for the specific moving mode, reference may be made to the related descriptions in 1-3 above.
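The point that moving an AI photo changes only its storage location while preserving its association with video 1 can be sketched as follows. The data model and all names are invented for illustration and are not part of the claimed method.

```python
class GalleryItem:
    """Minimal gallery record: an item knows its album and its associations."""

    def __init__(self, name, album):
        self.name = name
        self.album = album
        self.associated = set()  # names of associated gallery items


def associate(a, b):
    """Record a two-way association, e.g. video 1 <-> its AI photo."""
    a.associated.add(b.name)
    b.associated.add(a.name)


def move(item, target_album):
    """Moving only changes the stored location; associations survive."""
    item.album = target_album
    return f"Moved to {target_album} album"
```

For example, after `associate(video1, ai_photo1)` and `move(ai_photo1, "screen capture")`, the AI photo sits in the screen capture album but still lists video 1 among its associations.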
In one embodiment, in a scene of moving photos, videos, the album selected by the user (i.e., the album into which the photos, videos are moved) may be referred to as a first target album, and in a scene of copying photos, videos, the album selected by the user (i.e., the album into which the photos, videos are copied) may be referred to as a second target album.
The following describes the process of copying gallery data in the embodiment of the present application. In one embodiment, video 1', obtained by copying video 1, may maintain the association relationship with the AI photo associated with video 1, and after video 1 or video 1' is deleted, the video that was not deleted (e.g., video 1' or video 1) may maintain the association with the AI photo.
For example, as shown by a in fig. 9B, when the terminal displays a frame of the original video (video 1), the user clicks the more control 373, and the terminal may be triggered to display the first operation menu 41. As shown by b in fig. 9B, the user operates the copy control 412, and the terminal may display the replicable albums, for which reference may be made to the related description of the movable albums in the above embodiments. In the embodiment of the present application, taking opening video 1 from the camera album as an example, the replicable albums accordingly do not include the camera album, as shown by c in fig. 9B. For example, if the user selects the screen capture album, the terminal may copy video 1 to the screen capture album. After the terminal copies video 1, it may return to displaying the interface shown by a in fig. 9B. In one embodiment, to prompt the user that video 1 has been copied, a prompt may be displayed on the interface, such as "copied to screen capture album", as shown by d in fig. 9B. In one embodiment, the information indicating that the video or photo has been copied to the target album may be referred to as second prompt information.
Accordingly, when the terminal displays video 1' in the screen capture album, the association area 36 may be displayed, and the thumbnail of video 1 and the thumbnail of the AI photo associated with video 1 may be displayed in the association area 36, as shown by e in fig. 9B. In one embodiment, after video 1 is copied, the AI photo associated with video 1 may also be copied at the same time, so that a one-touch control for generating the AI video may be displayed on the interface shown by e in fig. 9B.
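The copy behavior described here (video 1' inherits video 1's AI photo association, and deleting either video leaves the survivor associated) can be sketched as below. The dictionary layout and helper names are invented for illustration only.

```python
# A toy gallery store: each video records its album and associated AI photos.
gallery = {
    "video 1": {"album": "camera", "ai_photos": ["AI photo 1"]},
}


def copy_video(name, target_album):
    """Copy a video; the copy keeps the association with the AI photos."""
    dup = name + "'"
    gallery[dup] = {
        "album": target_album,
        "ai_photos": list(gallery[name]["ai_photos"]),  # association copied
    }
    return f"Copied to {target_album} album"  # i.e. second prompt information


def delete_video(name):
    """Deleting one video leaves the other video's association intact."""
    gallery.pop(name)
```

After `copy_video("video 1", "screen capture")` and `delete_video("video 1")`, the surviving video 1' still references AI photo 1.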
In the embodiment of the present application, when the terminal displays video 1, regardless of whether the terminal displays the AI photo associated with video 1, the terminal may support copying the AI photo; to reduce the display of repeated content, the copied AI photo may not be associated with video 1.
For example, as shown by a in fig. 9C, when the terminal displays AI photo 1 associated with video 1, the user clicks the more control 373, which may trigger the terminal to display the first operation menu 41. As shown by b in fig. 9C, the user operates the copy control 412, and the terminal may display the replicable albums, as shown by c in fig. 9C. For example, if the user selects the screen capture album, the terminal may copy AI photo 1 to the screen capture album. After copying AI photo 1, the terminal may return to displaying the interface shown by a in fig. 9C. In one embodiment, to prompt the user that AI photo 1 has been copied, a prompt may be displayed on the interface, such as "copied to screen capture album", as shown by d in fig. 9C.
Accordingly, when the terminal displays AI photo 1 in the screen capture album, in order to reduce the display of repeated content, the association area 36 may not be displayed on the interface on which AI photo 1 is displayed, as shown by e in fig. 9C.
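By contrast with copying a video, the copied AI photo deliberately drops the association, so the interface showing it need not render the association area 36. A sketch with invented names:

```python
def copy_ai_photo(photo, target_album):
    """Copy an AI photo; the copy is not associated with the source video."""
    return {
        "name": photo["name"],
        "album": target_album,
        "video": None,  # association deliberately not copied
    }


def interface_elements(photo):
    """The association area is only rendered for photos linked to a video."""
    if photo["video"] is not None:
        return ["photo", "association area 36"]
    return ["photo"]
```

Rendering the copy therefore shows only the photo, which is one way to realize the "reduce repeated content" behavior described above.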
In addition, in a scenario in which a third-party device copies video 1 or the AI photo, the third-party device may copy the association relationship between video 1 and the AI photo at the same time, and thus the association area 36 may be displayed when the third-party device displays video 1. Likewise, when displaying video 1 or the AI photo, the third-party device may avoid displaying repeated content.
In the embodiment of the present application, the terminal can copy video 1 and the AI photo associated with video 1, which suits the user's usage habits; the terminal can prompt the user that the copy is complete, and can omit the associated video 1 when displaying the copied AI photo, thereby improving the user experience.
In one embodiment, for a terminal, a video (e.g., video 1) in the terminal is associated with at least one photo, the at least one photo being generated based on the video, the at least one photo being included in a first album in a gallery application. Referring to fig. 10, the man-machine interaction method provided by the embodiment of the present application may include:
s1001, in response to a user operation on a thumbnail of a target photo displayed in a first album or photo tab, displaying a movement control, wherein when the thumbnail of the target photo is displayed in the photo tab, the photo in the first album is included in the photo tab, and the target photo is included in at least one photo.
S1002, in response to a user operating a mobile control, displaying an identification of a movable album.
S1003, moving the target photo to the target photo album in response to the operation of the user on the identification of the target photo album in the identifications of the movable photo albums.
S1004, displaying first prompt information, wherein the first prompt information is used for indicating that the target photo is moved to the target album.
In the embodiment of the present application, for S1001 to S1004, reference may be made to fig. 6A to fig. 8B and the related description of fig. 9A. In the embodiment of the present application, the video may also be shared, for which reference may be made to the related description of photo sharing, and the video and the photo may be copied, for which reference may be made to the descriptions of fig. 9B-9C.
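Steps S1001-S1004 can be traced as an ordered sequence of interface events. This is an illustrative trace only; the event strings and function name are invented.

```python
def interaction_flow(target_photo, movable_albums, chosen_album):
    """Replay S1001-S1004 as a list of interface events."""
    events = []
    events.append("display movement control")                           # S1001
    events.append(f"display movable albums: {sorted(movable_albums)}")  # S1002
    if chosen_album not in movable_albums:
        raise ValueError("target album must be among the movable albums")
    events.append(f"move {target_photo} to {chosen_album} album")       # S1003
    events.append(f"prompt: {target_photo} moved to {chosen_album} album")  # S1004
    return events
```

The final event corresponds to the first prompt information of S1004, displayed only after the move of S1003 has completed.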
Fig. 11 is a schematic structural diagram of a man-machine interaction device according to an embodiment of the present application. The man-machine interaction device may be a terminal as in the above embodiment, or a chip in the terminal. Video is associated with at least one photo, the at least one photo is generated based on the video, the at least one photo is contained in a first album in a gallery application, and referring to fig. 11, the human-machine interaction device may include: a display module 1101 and a processing module 1102.
A display module 1101, configured to display a movement control in response to a user's operation on a thumbnail of a target photo displayed in the first album or the photo tab, where the photo in the first album is included in the photo tab and the target photo is included in the at least one photo when the thumbnail of the target photo is displayed in the photo tab; and displaying an identification of the movable album in response to the user operating the movement control.
And a processing module 1102, configured to move the target photo to the target album in response to the user operating the identifier of the target album from the identifiers of the movable albums.
The display module 1101 is further configured to display a first prompt, where the first prompt is used to indicate that the target photo has been moved to the target album.
In one possible implementation, the operation on the thumbnail of the target photo is used to trigger the terminal to display the target photo. The display module 1101 is specifically configured to display a first interface, where the target photo and a first operation area are displayed on the first interface, and the first operation area includes the movement control.
In one possible implementation, the operation of the thumbnail of the target photograph is used to select the target photograph. The display module 1101 is specifically configured to display a second operation area, where the second operation area includes the movement control.
In one possible implementation, the second operating region is different from the first operating region.
In one possible implementation, the display module 1101 is specifically configured to:
when the user operates the thumbnail of the target photo displayed in the first album, displaying the first prompt information according to the type of the first album; or,
And when the user operates the thumbnail of the target photo displayed in the photo tab, displaying the first prompt information according to whether gallery data in the target album is contained in the photo tab.
In a possible implementation manner, when the first album is a physical album, the display module 1101 is specifically configured to display a next content arranged after the target photo in the first album, and display the first prompt information. When the first album is a virtual album, the display module 1101 is specifically configured to display the target photo, and display the first prompt information.
In a possible implementation manner, when gallery data in the target album is included in the photo tab, the display module 1101 is specifically configured to display the target photo and display the first prompt information; when the gallery data in the target album is not included in the photo tab, the display module 1101 is specifically configured to display a next content arranged after the target photo in the photo tab, and display the first prompt information.
In one possible implementation manner, the first album is a physical album or a virtual album, when the first album is a physical album, the movable album does not include the first album and includes other physical albums, and when the first album is a virtual album, the movable album includes all physical albums; or,
When the user operates the thumbnail of the target photo displayed in the photo tab, the movable photo album contains all entity photo albums.
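The rule above for which albums are offered as move targets can be sketched as follows; the function and parameter names are invented, and the encoding of album kinds is only an illustration.

```python
def movable_albums(source_album, entity_albums, source_is_entity):
    """Return the albums offered as move targets.

    Moving out of an entity album excludes that album itself; moving out
    of a virtual album or the photo tab offers all entity albums.
    """
    if source_is_entity:
        return [a for a in entity_albums if a != source_album]
    return list(entity_albums)
```

For example, moving a photo out of the camera album offers every other entity album, while moving it out of a virtual album offers all entity albums including the camera album.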
In one possible implementation, when the thumbnail of the video associated with the at least one photo is displayed in the first album and the photo tab, the display module 1101 is further configured to display a first interface in response to the user operating on the thumbnail of the video displayed in the first album or the photo tab, where a display area of the first interface displays a screen of the video, and the first interface further includes: an association region including an identification of the video and an identification of at least one photograph associated with the video, the at least one photograph generated based on the video, and a first operational region including a movement control therein; displaying the target photo in the display area in response to the user operating the identification of the target photo on the first interface, and displaying the identification of the movable album in response to the user operating the movement control.
The processing module 1102 is further configured to move the target photo to the target album in response to an operation of the user on the identification of the target album.
The display module 1101 is further configured to display a first prompt message on a first interface on which the target photo is displayed, where the first prompt message is used to indicate that the target photo has been moved to the target album.
In a possible implementation manner, the target album is a first target album, and the first operation area further includes a copy control, and the display module 1101 is further configured to, when the display area displays the frame of the video, display, in response to the user operating the copy control, an identification of the replicable album.
The processing module 1102 is further configured to copy the video to a second target album in response to an operation of the user on the identifier of the second target album in the identifiers of the replicable albums.
The display module 1101 is further configured to display a second prompt, where the second prompt is used to indicate that the video has been copied to the second target album.
In a possible implementation manner, the display module 1101 is further configured to display the first interface in response to the user triggering an operation of displaying the video by the terminal in the second target album.
In a possible implementation manner, the target album is a first target album, the first operation area further includes a copy control, and when the display area displays the target photo, the display module 1101 is specifically configured to display an identifier of the replicable album in response to the user operating the copy control.
The processing module 1102 is specifically configured to copy the target photo to a second target album in response to an operation of the user on the identifier of the second target album in the identifiers of the replicable albums.
The display module 1101 is further configured to display a second prompt, where the second prompt is used to indicate that the target photo is copied to the second target album.
In a possible implementation manner, the display module 1101 is further configured to display the target photo in response to the user triggering an operation of displaying the target photo by the terminal in the second target album, where the association area is not included on the interface on which the target photo is displayed in the second target album.
In one possible implementation manner, the first album is a physical album or a virtual album, when the first album is a physical album, the replicable album does not include the first album and includes other physical albums, and when the first album is a virtual album, the replicable album includes all physical albums; or,
when the user operates on the thumbnail of the video displayed in the photo tab, the replicable album contains all entity albums.
In a possible implementation manner, the first operation area further includes a more control, and the display module 1101 is further configured to display at least one operable control in response to the user operating the more control, where the at least one operable control includes the movement control and the copy control.
The man-machine interaction device provided by the embodiment of the application can be used for realizing the man-machine interaction method in the embodiment, has the same realization principle and technical effect as the embodiment, and is not repeated herein.
In an embodiment, referring to fig. 12, an embodiment of the present application further provides an electronic device, which may be the terminal described in the foregoing embodiments. The electronic device may include: a processor 1201 (e.g., a CPU) and a memory 1202. The memory 1202 may include a random-access memory (RAM) and may also include a non-volatile memory (NVM), such as at least one disk memory. Various instructions may be stored in the memory 1202 for performing various processing functions and implementing the method steps of the present application.
Optionally, the electronic device according to the present application may further include: a power supply 1203, a communication bus 1204, a communication port 1205, and a display 1206. The communication port 1205 is used to enable connection and communication between the electronic device and other peripheral devices. In the embodiment of the application, the memory 1202 is used to store computer-executable program code, which includes instructions; when the processor 1201 executes the instructions, the instructions cause the processor 1201 of the electronic device to perform the actions in the above method embodiments, which have similar implementation principles and technical effects and are not described herein again. In one embodiment, the display 1206 may be a display screen of the electronic device for displaying an interface of the electronic device.
It should be noted that the modules or components described in the above embodiments may be one or more integrated circuits configured to implement the above methods, for example: one or more application-specific integrated circuits (application specific integrated circuit, ASIC), one or more digital signal processors (digital signal processor, DSP), or one or more field-programmable gate arrays (field programmable gate array, FPGA), or the like. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (central processing unit, CPU) or another processor that can invoke the program code, such as a controller. For another example, the modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), etc.
The term "plurality" herein refers to two or more. The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship; in the formula, the character "/" indicates that the front and rear associated objects are a "division" relationship. In addition, it should be understood that in the description of the present application, the words "first," "second," and the like are used merely for distinguishing between the descriptions and not for indicating or implying any relative importance or order.
It will be appreciated that the various numerical numbers referred to in the embodiments of the present application are merely for ease of description and are not intended to limit the scope of the embodiments of the present application.
It should be understood that, in the embodiments of the present application, the sequence number of each process does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not limit the implementation process of the embodiments of the present application.

Claims (19)

1. A method of human-machine interaction, wherein a video is associated with at least one photograph, the at least one photograph being generated based on the video, the at least one photograph being included in a first album in a gallery application, the method comprising:
displaying a movement control in response to a user operation on a thumbnail of a target photo displayed in the first photo album or the photo tab, wherein the photo in the first photo album is contained in the photo tab and the target photo is contained in the at least one photo when the thumbnail of the target photo is displayed in the photo tab;
responding to the user operation of the mobile control, and displaying the identification of the movable album;
responding to the operation of the user on the identification of the target album in the identification of the movable album, and moving the target photo to the target album;
and displaying first prompt information, wherein the first prompt information is used for indicating that the target photo is moved to the target album.
2. The method of claim 1, wherein the operation of the thumbnail of the target photograph is used to trigger a terminal to display the target photograph;
The displaying the mobile control comprises:
and displaying a first interface, wherein the target photo is displayed on the first interface, and a first operation area comprises the mobile control.
3. The method of claim 2, wherein the operation of the thumbnail of the target photograph is used to select the target photograph;
the displaying the mobile control comprises:
and displaying a second operation area, wherein the second operation area comprises the mobile control.
4. A method according to claim 3, wherein the second operating region is different from the first operating region.
5. The method of any one of claims 1-4, wherein displaying the first prompt message comprises:
when the user operates the thumbnail of the target photo displayed in the first album, displaying the first prompt information according to the type of the first album; or,
and when the user operates the thumbnail of the target photo displayed in the photo tab, displaying the first prompt information according to whether gallery data in the target album is contained in the photo tab.
6. The method of claim 5, wherein displaying the first reminder information according to the type of the first album comprises:
when the first album is an entity album, displaying the next content arranged behind the target photo in the first album, and displaying the first prompt information;
and when the first album is a virtual album, displaying the target photo and displaying the first prompt information.
7. The method of claim 5, wherein displaying the first prompt message according to whether gallery data in the target album is included in the photo tab comprises:
when the gallery data in the target album is contained in the photo tab, displaying the target photo and displaying the first prompt information;
and when the gallery data in the target album is not contained in the photo tab, displaying the next content arranged behind the target photo in the photo tab, and displaying the first prompt information.
8. The method of any one of claims 1-7, wherein the first album is a physical album or a virtual album, wherein when the first album is a physical album, the movable album does not include the first album and includes other physical albums, and wherein when the first album is a virtual album, the movable album includes all physical albums; or,
When the user operates the thumbnail of the target photo displayed in the photo tab, the movable photo album contains all entity photo albums.
9. The method of any of claims 1-8, wherein when thumbnails of the at least one photo-associated video are displayed in the first album and the photo tab, the method further comprises:
in response to the user's operation on the thumbnail of the video displayed in the first album or the photo tab, displaying a first interface, a display area of which displays a screen of the video, the first interface further comprising: an association region including an identification of the video and an identification of at least one photograph associated with the video, the at least one photograph generated based on the video, and a first operational region including a movement control therein;
displaying the target photo in the display area in response to an operation of the user on the first interface for identification of the target photo;
responding to the user operation of the mobile control, and displaying the identification of the movable album;
Moving the target photo to the target album in response to the user's operation of the identification of the target album;
and displaying first prompt information on a first interface on which the target photo is displayed, wherein the first prompt information is used for indicating that the target photo has moved to the target album.
10. The method of claim 9, wherein the target album is a first target album, the first operating area further comprising a copy control therein, the method further comprising:
when the display area displays the picture of the video, responding to the user to operate the copy control, and displaying the identification of the replicable album;
copying the video to a second target album in response to the user operating the identification of the second target album in the replicable albums;
and displaying second prompt information, wherein the second prompt information is used for indicating that the video is copied to the second target album.
11. The method according to claim 10, wherein the method further comprises:
and responding to the operation of triggering the terminal to display the video in the second target album by the user, and displaying the first interface.
12. The method of claim 9, wherein the target album is a first target album, the first operating area further comprising a copy control therein, the method further comprising:
when the display area displays the target photo, responding to the user to operate the copy control, and displaying the identification of the copy album;
copying the target photo to a second target album in response to the user operating the identification of the second target album in the replicable albums;
and displaying second prompt information, wherein the second prompt information is used for indicating that the target photo is copied to the second target album.
13. The method according to claim 12, wherein the method further comprises:
and responding to the operation of triggering the terminal to display the target photo in the second target photo album by the user, displaying the target photo, wherein the association area is not included on the interface of the second target photo album, on which the target photo is displayed.
14. The method according to any one of claims 10-13, wherein the first album is a physical album or a virtual album; when the first album is a physical album, the replicable albums exclude the first album and include the other physical albums; when the first album is a virtual album, the replicable albums include all physical albums; or,
when the user operates on the thumbnail of the video displayed in the photo tab, the replicable albums include all physical albums.
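The selection rule of claim 14 reduces to a few lines of logic. This sketch assumes albums are identified by name; the function and parameter names are illustrative, not from the patent:

```python
def replicable_albums(first_album, physical_albums, first_is_virtual=False,
                      from_photo_tab=False):
    """Return the copy targets offered to the user, per the claimed rule.

    - operation started from a photo-tab thumbnail -> all physical albums
    - first album is a virtual album               -> all physical albums
    - first album is a physical album              -> every other physical album
    """
    if from_photo_tab or first_is_virtual:
        return list(physical_albums)
    return [a for a in physical_albums if a != first_album]
```

The key asymmetry is that a physical source album is excluded from its own target list, while a virtual album (or a photo-tab entry point) leaves every physical album available.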
15. The method according to any one of claims 10-14, wherein the first operation area further comprises a more control, and before the user operates the movement control, the method further comprises:
displaying at least one operable control in response to the user operating the more control, wherein the at least one operable control comprises the movement control and the copy control.
16. A human-machine interaction apparatus, wherein a video is associated with at least one photo, the at least one photo is generated based on the video, and the at least one photo is contained in a first album in a gallery application, the apparatus comprising:
a display module configured to:
display a movement control in response to a user operation on a thumbnail of a target photo displayed in the first album or in a photo tab, wherein, when the thumbnail of the target photo is displayed in the photo tab, the photos in the first album are contained in the photo tab and the target photo is one of the at least one photo; and
display identifications of movable albums in response to the user operating the movement control;
a processing module configured to move the target photo to a target album in response to the user operating the identification of the target album among the identifications of the movable albums;
the display module being further configured to display first prompt information, wherein the first prompt information indicates that the target photo has been moved to the target album.
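Claim 16 divides the apparatus into a display module (everything shown to the user) and a processing module (the actual move). A hedged sketch of that division of labor, with module and method names invented for illustration:

```python
class DisplayModule:
    """Collects everything the UI would render, in order."""
    def __init__(self):
        self.rendered = []

    def show(self, element):
        self.rendered.append(element)

class ProcessingModule:
    """Performs the actual move between album item lists."""
    def move(self, photo, source_items, target_items):
        source_items.remove(photo)
        target_items.append(photo)

# Wiring the claimed sequence: thumbnail operation -> movement control ->
# movable-album identifications -> move -> first prompt information.
display, processing = DisplayModule(), ProcessingModule()
first_album, target_album = ["p1.jpg"], []
display.show("movement control")
display.show("movable album identifications")
processing.move("p1.jpg", first_album, target_album)
display.show("Moved to target album")  # first prompt information
```

Unlike the copy flow of claims 10-14, the move removes the photo from its source album before adding it to the target.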
17. An electronic device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the method according to any one of claims 1-15.
18. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program or instructions which, when executed, implement the method according to any one of claims 1-15.
19. A computer program product, comprising a computer program or instructions which, when executed by a processor, implement the method according to any one of claims 1-15.
CN202210193746.3A 2022-02-28 2022-02-28 Man-machine interaction method and device and electronic equipment Pending CN116701685A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210193746.3A CN116701685A (en) 2022-02-28 2022-02-28 Man-machine interaction method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN116701685A true CN116701685A (en) 2023-09-05

Family

ID=87832725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210193746.3A Pending CN116701685A (en) 2022-02-28 2022-02-28 Man-machine interaction method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN116701685A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination