CN111752450A - Display method and device and electronic equipment - Google Patents


Info

Publication number
CN111752450A
Authority
CN
China
Prior art keywords: input, label, information, user, tag
Legal status: Pending
Application number: CN202010469598.4A
Other languages: Chinese (zh)
Inventor: 郑祥赟
Current Assignee: Vivo Mobile Communication Co Ltd
Original Assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010469598.4A
Publication of CN111752450A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/54 Browsing; Visualisation therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses a display method, a display device and electronic equipment. The method includes: receiving a first input to a target label from a user; in response to the first input, when first information associated with the target label is identified in a first preview screen, hiding the first information in the first preview screen to obtain a second preview screen; and displaying the second preview screen. In this way, the area the user wants to hide in the preview screen can be hidden quickly and conveniently.

Description

Display method and device and electronic equipment
Technical Field
Embodiments of the invention relate to the field of communication technologies, and in particular to a display method and device and electronic equipment.
Background
At present, when a user views an image, it often contains areas the user would like to hide, for example a passerby at the edge of the frame, a trash can in a corner, or a location marker that exposes private information.
Therefore, how to quickly and conveniently hide such areas in a captured work has become a problem to be solved.
Disclosure of Invention
The embodiment of the invention provides a display method, a display device and electronic equipment, which can quickly and conveniently hide an area which a user wants to hide in a preview picture.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides a display method, where the method may include:
receiving a first input of a target label from a user;
in response to the first input, when first information associated with the target label is identified in a first preview screen, hiding the first information in the first preview screen to obtain a second preview screen;
and displaying a second preview screen.
In a second aspect, an embodiment of the present invention provides a display device, which may include:
the receiving module is used for receiving a first input of a user to the target label;
the hiding module is used for responding to a first input, and hiding first information in the first preview picture to obtain a second preview picture under the condition that the first information associated with the target label is identified from the first preview picture;
and the display module is used for displaying the second preview picture.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor; when the computer program is executed by the processor, the display method according to the first aspect is implemented.
In a fourth aspect, there is provided a readable storage medium having stored thereon a computer program which, when executed by a computer, causes the computer to perform the display method according to the first aspect.
According to the method, first information (such as privacy information) associated with the target label in the first preview screen is hidden in response to the user's input to the target label, and the second preview screen with the first information hidden is displayed. The area the user wants to hide in the first preview screen can thus be hidden quickly and conveniently, without impairing the overall aesthetics of the resulting second preview screen, achieving a better display effect and simplifying post-processing work.
Drawings
The present invention will be better understood from the following description of specific embodiments thereof taken in conjunction with the accompanying drawings, in which like or similar reference characters designate like or similar features.
Fig. 1 is a schematic view of an application scenario of a display method according to an embodiment of the present invention;
Fig. 2 is a flowchart of a display method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a first display interface according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a second display interface according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a third display interface according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a fourth display interface according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of a fifth display interface according to an embodiment of the present invention;
Fig. 8 is a flowchart of a display method based on scene one according to an embodiment of the present invention;
Fig. 9 is a flowchart of a display method based on scene two according to an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a display device according to an embodiment of the present invention;
Fig. 11 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Currently, two scenarios arise when shooting with an electronic device, described below:
scene one: the user wants to hide part of the information of the imaged picture. When sharing the picture, the user wants to hide some information in the picture. As shown in fig. 1, for example, in order to remove private information (for example, an address) related to a picture, a conventional solution is to fill a mosaic with a portion to be hidden and cover the original information. Or, a complicated image editing process is performed, but such a concealment process is complicated and takes a long time, and it is not easy for the user to grasp the image editing process technique.
Scene two: the user wants to hide part of information of the photographing preview object. For example, when taking a picture, it is sometimes desirable to remove some unnecessary information and to see the real-time display effect, but because the picture cannot be previewed, it is very laborious to modify some regions to be hidden in the imaged picture in the later period. Or, when the video is recorded, some suddenly-intruding objects interfere with the quality of the whole video, and when the interference is serious, the video is required to be recorded again, so that the shooting experience of a user is reduced.
Based on the above application scenarios, embodiments of the present invention provide a display method and an electronic device, so as to solve the problem in the related art that it is difficult for a user to hide an area of an image that the user wants to hide. The display method provided by the embodiments of the present invention is described in detail below.
Fig. 2 is a flowchart of a display method according to an embodiment of the present invention.
As shown in fig. 2, the display method may include S210-S230, and the method is applied to an electronic device, and specifically as follows:
s210, receiving a first input of a target label from a user.
And S220, in response to the first input, when first information associated with the target label is identified in the first preview screen, hiding the first information in the first preview screen to obtain a second preview screen.
S230, a second preview screen is displayed.
According to the display method provided by the invention, first information (such as privacy information) associated with the target label in the first preview screen is hidden in response to the user's input to the target label, and the second preview screen with the first information hidden is displayed. The area the user wants to hide in the first preview screen can thus be hidden quickly and conveniently, without impairing the overall aesthetics of the resulting second preview screen, achieving a better display effect and simplifying post-processing work.
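As a rough sketch (not from the patent itself), the S210-S230 flow can be modeled as a single function with pluggable recognition and hiding routines; `handle_first_input`, `identify`, and `hide` are hypothetical names, since the patent does not prescribe any particular recognition algorithm:

```python
from typing import Callable, List, Optional, Tuple

# A preview frame is modeled as a list of pixel rows; a region as
# (row, col, height, width). Both are simplifications for illustration.
Region = Tuple[int, int, int, int]

def handle_first_input(target_label: str,
                       first_preview: List[list],
                       identify: Callable[[str, List[list]], Optional[Region]],
                       hide: Callable[[List[list], Region], List[list]]) -> List[list]:
    """S210-S220: if first information associated with target_label is
    identified in the first preview, hide it and return the second
    preview; otherwise the preview is returned unchanged."""
    region = identify(target_label, first_preview)   # S220: recognition
    if region is None:
        return first_preview                          # nothing to hide
    return hide(first_preview, region)                # S220: hiding
    # S230 (displaying the second preview) is left to the UI layer.
```

In use, `identify` would wrap the device's object recognizer and `hide` its repair routine.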
The contents of S210-S230 are described below, respectively:
First, referring to S210: a first input of the user to the target label is received.
Before describing S210 in detail, the present invention further provides a method for presetting a plurality of labels, so that when the plurality of labels are displayed, a first input by which the user selects a target label from them can be received.
First, in one possible embodiment, two ways of setting the plurality of labels are described.
Way one: first mark information identified from the first image is stored in association with the first label.
Way two: first mark information selected by the user from the first image is stored in association with a first label edited by the user.
The two ways are described in detail below:
the first method is as follows: as an implementation manner of the present application, in order to increase the richness of the first tag, before receiving the first input of the user target tag, the following steps may be further included:
receiving a fourth input of the first label on the first display interface from the user under the condition that the first display interface comprises the first image; marking first marking information related to the first label in the first image in response to a fourth input; first tag information is stored in association with the first tag.
In order to facilitate a user to quickly process a target object, first label information and a first label having an association relationship may be preset, and when a first image is displayed on a first display interface, an editable mode is performed on a picture area of the first display interface, and first, a touch input of the user to the first label on the first display interface is received. Then, in response to a fourth input, marking first marking information associated with the first label in the first image, for example, the user clicks the "trash can" label, the electronic device automatically identifies a frame region containing the trash can, and marks the frame region of the trash can associated with the "trash can" label in the first image. Finally, the first tag information is stored in association with the first tag.
For example, as shown in fig. 3, a picture taken in spring may often have a butterfly in the mirror, and some users may not want the butterfly in the picture taken based on the usage habits of the users. To hide the butterfly in the picture, it can be handled as follows: first, a fourth input of the first label 'butterfly' on the first display interface by the user is received. Then, in response to the fourth input, the electronic device automatically identifies the screen content covered by the first label "butterfly", and outlines the screen content covered by the "butterfly" with a dotted line (dotted line range in fig. 3) to display the marked first mark information to the user, i.e., mark the first mark information related to the first label in the first image. And finally, storing the picture content covered by the first mark information butterfly in association with the first label butterfly.
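A minimal sketch of way one, assuming a hypothetical `TagStore` class and a pluggable recognizer (the patent does not specify data structures or a recognition method):

```python
class TagStore:
    """Illustrative store mapping a first label to its first mark
    information (here, a bounding-box tuple)."""

    def __init__(self):
        self._marks = {}          # first label -> first mark information

    def mark_and_store(self, label, image, recognize):
        """Fourth input: recognize the region covered by the label
        (e.g. the butterfly) and store it with the label."""
        region = recognize(label, image)
        if region is not None:
            self._marks[label] = region
        return region

    def adjust(self, label, new_region):
        """Sixth input: replace the auto-detected first mark
        information with user-adjusted second mark information."""
        self._marks[label] = new_region

    def get(self, label):
        return self._marks.get(label)
```

The `adjust` method corresponds to the refinement step described in the next paragraphs.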
As another implementation manner of the present application, in order to improve the accuracy of the picture range corresponding to the first label, and thus the accuracy of hiding regional picture content, after marking the first mark information related to the first label in the first image in response to the fourth input, and before storing the first mark information in association with the first label, the method may further include the following steps:
receiving a sixth input of the user; in response to the sixth input, adjusting the first mark information to obtain second mark information; storing the first mark information in association with the first label then specifically comprises: storing the second mark information in association with the first label.
To further improve the accuracy of the first mark information automatically recognized by the electronic device, after the first mark information related to the first label is marked, the user's adjustment of the first mark information (the picture content covered by the butterfly) can be received to obtain second mark information, and the adjusted picture content, i.e., the second mark information, is stored in association with the first label "butterfly".
The second method comprises the following steps: as another implementation manner of the present application, in order to improve the accuracy of the picture content represented by the first tab and enable the user to quickly imagine the corresponding picture when seeing the tab, before receiving the first input of the user to the target tab, the method may further include the following steps:
receiving a fifth input that a user selects first mark information from the first image under the condition that the first display interface comprises the first image; in response to a fifth input, displaying an editable label associated with the first label information; receiving a first label input by a user in an editable label; first tag information is stored in association with the first tag.
In order to facilitate a user to quickly process a target object, first mark information and a first label which have an association relationship may be preset, and when a first image is displayed on a first display interface, a picture area of the first display interface is changed into an editable mode. After the first mark information is determined, then, long-pressing is carried out in the first mark information area, a label capable of inputting the content is popped up, namely, a first label for describing the picture content represented by the first mark information is input, and then, the first label input by a user in the editable label is received. Finally, the first tag information is stored in association with the first tag.
For example, a picture taken in spring may often have a butterfly-in mirror, and some users may not want to have a butterfly appearing in the taken picture based on the usage habits of the users, which affects the aesthetic feeling of the image. Then, as shown in fig. 4, by long-pressing in the first label information area, the first display interface displays an editable tab associated with the butterfly. Then, the butterfly input by the user in the editable label, namely the first label, is received. Finally, the first label information (the picture content covered by the butterfly) is stored in association with the first label ("butterfly").
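Way two can be sketched in the same spirit; the two small functions below model the editable-label popup and the final association, with all names being illustrative assumptions rather than anything prescribed by the patent:

```python
def pop_up_editable_label(selected_region):
    """Fifth input: the device shows an editable label bound to the
    user-selected first mark information (a region tuple here)."""
    return {"region": selected_region, "text": ""}

def confirm_label(editable, typed_label, store):
    """The first label typed into the editable label is stored in
    association with the selected first mark information."""
    editable["text"] = typed_label
    store[typed_label] = editable["region"]
    return store
```

Compared with way one, the flow is inverted: the region comes first, the label second.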
Then, based on the plurality of labels stored through way one and way two, the user can determine a target label from the plurality of pre-stored labels.
The first input of the user to the target label referred to above may be received before a shooting operation, that is, before the first preview screen is displayed; it may also be received while the preview screen is being captured, that is, after the first preview screen is displayed.
Illustratively, as shown in fig. 5, in response to the user's touch input to a label button on the first display interface, a drop-down menu (label 1, label 2, etc.) associated with labels is displayed on the first display interface for the user to select one or more target labels from; a first input of the user to a target label on the first display interface is then received.
As another implementation manner of the present application, in order to give the user more labels to choose from, and to better match the user's operating habits when providing candidate labels, after the first mark information is stored in association with the first label, the following steps may be further included:
uploading the first label to a cloud server, so that the cloud server can classify the first labels uploaded by a plurality of users according to preset processing conditions.
Some common first labels, together with the first mark information associated with them, can be uploaded to a cloud server to be shared with other users, and the cloud server can automatically process and classify them according to the preset processing conditions, further improving the shooting effect.
Illustratively, a landmark scenic building is under maintenance and cannot be kept out of the frame when taking a picture. When a user visits and shoots there for the first time, a first label uploaded by other users to the cloud server can be selected, so that the cloud server's first label is automatically loaded on the shooting preview interface during preview, and the content of the maintenance area beside the landmark building is automatically removed.
Specifically, the preset processing conditions include at least one of the following:
the depth of field of the first image, the calling frequency of the first label, and the feature information of the first image.
That is, the cloud server may automatically process and classify based on, for example, the feature information of the first image (whether it shows a landscape, people, food, etc.), the depth of field, and the frequency with which users hide a given label.
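One plausible reading of this classification step, sketched with assumed field names (`label`, `category`, `calls`); the actual cloud-side processing is not specified by the patent:

```python
from collections import defaultdict

def classify_uploaded_labels(uploads):
    """uploads: iterable of records such as
    {"label": "trash can", "category": "landscape", "calls": 57},
    where "category" stands in for the image feature information and
    "calls" for the label's calling frequency.
    Returns labels grouped by category, most frequently called first."""
    groups = defaultdict(list)
    for up in uploads:
        groups[up["category"]].append(up)
    return {cat: [u["label"] for u in sorted(items, key=lambda u: -u["calls"])]
            for cat, items in groups.items()}
```

Depth of field could be folded in as an additional grouping or sorting key in the same way.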
As another implementation manner of the present application, in order to improve the user experience and accurately recommend target labels, before receiving the first input of the user to the target label, the method may further include the following steps:
determining feature information of the first preview screen; determining a target recommended label from label information pre-stored in a cloud server and/or the electronic device according to the feature information; and displaying the target recommended label.
The electronic device can automatically acquire the feature information of the first preview screen, determine a target recommended label from the pre-stored label information according to that feature information, and display the target recommended label in a recommendation area on the first display interface, so that the user can conveniently select the target label from the recommendations.
For example, when a user photographs the Broken Bridge at West Lake, passersby often walk behind them. The feature information of the first preview screen is determined to be "scenic spot" and "many passersby", and based on historical operation records on the cloud server and/or the electronic device it is determined that most users want to hide passersby appearing in the picture. "Passerby" is therefore chosen from the label information as the target recommended label and displayed on the first display interface, recommending it to the user and making it convenient for the user to select the target label.
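A hedged sketch of this recommendation step; the overlap-count scoring rule below is an assumption, since the text only requires that recommendations follow from the feature information and historical records:

```python
def recommend_labels(preview_features, stored_tags, top_n=3):
    """preview_features: set of feature strings for the first preview
    screen, e.g. {"scenic spot", "many passersby"}.
    stored_tags: dict mapping a label to the set of features it has
    historically been used with.
    Returns up to top_n labels ranked by feature overlap."""
    scored = [(len(preview_features & feats), label)
              for label, feats in stored_tags.items()]
    scored = [(s, label) for s, label in scored if s > 0]
    scored.sort(key=lambda t: (-t[0], t[1]))   # best match first, then name
    return [label for _, label in scored[:top_n]]
```

A real device would likely blend in the calling frequency discussed above as a second ranking signal.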
As another implementation manner of the present application, in order to better meet the user's need to choose a label source, before displaying the target recommended label on the first display interface, the method may further include the following steps:
receiving a seventh input of the user to a third preset control; and in response to the seventh input, displaying on the first display interface a target recommended label corresponding to the label source associated with the seventh input.
Illustratively, the label source controls comprise a cloud server control and a local control. As shown in fig. 7, in response to the user's touch input to the "cloud" control, target recommended labels stored in the cloud server are displayed on the first display interface; in response to the user's touch input to the "local" control, target recommended labels stored in the electronic device are displayed on the first display interface.
Specifically, as shown in the left diagram of fig. 6, a plurality of labels are displayed on the first display interface in response to the user's touch input to the "cloud" control. In response to the user's touch input to the "trash can" label, the electronic device automatically identifies the picture area containing the trash can and marks that area in the first image, or directly hides the trash can in the first preview screen and fills the area with background information, obtaining the second preview screen shown in the right diagram of fig. 6.
Next, referring to S220: as the preview screen is continuously updated, whether the current first preview screen contains first information associated with the target label is continuously identified; if it does, the first information in the first preview screen is hidden to obtain a second preview screen.
In one possible embodiment, in response to the first input, when first information associated with the target label is recognized in the first preview screen, the first information is marked on the first preview screen; a second input of the user is received; and in response to the second input, the first information associated with the target label in the first preview screen is hidden to obtain the second preview screen.
Specifically, in response to the first input, the first information associated with the target label is first marked on the first preview screen, for example outlined with a dashed line or covered by a semi-transparent layer. Then, in response to the second input of the user, the first information associated with the target label in the first preview screen is hidden to obtain the second preview screen.
The second input referred to above may include: the user's input to a first preset control, a sliding or pressing input on the screen display interface of the electronic device, or a preset gesture input to the electronic device.
Here, the first preview screen referred to above is either a captured image or a shooting preview screen; accordingly, in response to the second input, the hiding process may be applied to an already captured image, or to the preview screen during recording or shooting.
As another implementation manner of the present application, in order to make the second preview screen after regional hiding look more natural, with no visible processing traces, and to preserve its aesthetics, step S220 may specifically include:
in response to the first input, identifying first background information in the first preview screen and the first information associated with the target label; and hiding the first information based on the first background information.
As shown in fig. 7, in response to the first input, the first background information in the first preview screen and the first information associated with the target label are recognized; that is, the picture areas surrounding the first information are recognized. The first information is then hidden based on these surrounding areas (the first background information) to obtain the second preview screen. In the hidden result the first information (the butterfly) is no longer visible, and the area it covered is repaired by filling it with the first background information (e.g., sky). The second preview screen therefore looks natural, with almost no visible processing traces, and its aesthetics are preserved.
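As a toy illustration of "hiding based on first background information", the function below fills each hidden cell of a grid with the nearest non-hidden value seen in its row. A real device would use proper image inpainting; this grid-of-strings model is purely illustrative:

```python
def hide_with_background(frame, mask):
    """frame: list of rows of pixel labels; mask: same shape, True
    where the first information (e.g. the butterfly) must be hidden.
    Each hidden cell is repaired with nearby background content."""
    result = []
    for row, mrow in zip(frame, mask):
        background = [p for p, m in zip(row, mrow) if not m]
        fill = background[0] if background else None  # initial background value
        new_row = []
        for p, m in zip(row, mrow):
            if m:
                new_row.append(fill)   # repair with background information
            else:
                new_row.append(p)
                fill = p               # most recent background seen in this row
        result.append(new_row)
    return result
```

The same idea, applied with a real inpainting algorithm, yields the trace-free second preview screen described above.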
Finally, referring to S230: to enhance the user experience and meet the user's various display requirements, after the second preview screen is displayed, the method may further include the following steps:
receiving a third input of the user to a second preset control on the first display interface; and in response to the third input, updating the display from the second preview screen back to the first preview screen.
Specifically, a third input of the user to the second preset control on the first display interface is received; in response to the third input, a third preview screen is displayed, where the third preview screen is the first preview screen in which the first information is not hidden.
According to the user's habits, the first information, as an object to be displayed, can thus be shown again on the first display interface at the user's choice.
Specifically, the first information associated with the target label is marked on the first preview screen, for example outlined with a dashed line or covered by a semi-transparent layer. If the user does not want the first information hidden, i.e., wants it displayed on the first preview screen, the user taps the second preset control; in response to this third input to the second preset control, the first information associated with the target label in the first preview screen is not hidden.
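The display/restore interaction around S230 can be sketched as a small controller; the class and method names are assumptions for illustration, not terminology from the patent:

```python
class PreviewController:
    """Holds the original first preview so the third input can undo
    the hiding and restore it on the display."""

    def __init__(self, first_preview):
        self._first = first_preview    # original, with first information
        self.current = first_preview   # what the display shows now

    def apply_hide(self, second_preview):
        """Second input confirmed: display the hidden (second) preview."""
        self.current = second_preview

    def restore(self):
        """Third input on the second preset control: update the display
        back to the first preview, with the first information visible."""
        self.current = self._first
```

Keeping the unmodified first preview around is what makes the restore step cheap.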
Thus, the method in the embodiment of the present invention hides the first information (for example, privacy information) associated with the target label in the first preview screen in response to the user's input to the target label, and displays the result. The area the user wants to hide in the first preview screen can be hidden quickly and conveniently, without impairing the overall aesthetics of the resulting second preview screen, achieving a better display effect and simplifying the post-processing of interfering regions.
To facilitate understanding of the method provided by the embodiment of the present invention, the following description takes the first preview screen as an image and the first display interface as an image display interface, for the scenario shown in fig. 1.
Fig. 8 is a flowchart of a display method based on scene one according to an embodiment of the present invention.
As shown in fig. 8, the display method may include S810-S880 and is applied to an electronic device, as follows:
s810, receiving touch input of a user to the label control under the condition that the first image is displayed on the first display interface; marking first marking information related to a first label in a first image in response to touch input to the label control; first tag information is stored in association with the first tag.
To allow a user to quickly process a target object, first mark information and a first label having an association relationship may be preset. When the first image is displayed on the first display interface, the picture area of the first display interface enters an editable mode. First, a fourth input of the user on the first label on the first display interface is received. Then, in response to the fourth input, the first mark information associated with the first label is marked in the first image.
For example, when the user clicks the "trash can" label, the electronic device automatically identifies the frame region containing the trash can and marks that region, associated with the "trash can" label, in the first image. Finally, the first mark information is stored in association with the first label.
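The association step above can be sketched as a minimal store that maps a label to the image region it marks. This is a toy illustration under assumed names (`TagStore`, `Region`); the patent does not specify a data structure.

```python
# Illustrative sketch (not the patent's implementation): storing the first
# mark information (here, a bounding box) in association with the first label.
from dataclasses import dataclass

@dataclass(frozen=True)
class Region:
    x: int  # left edge of the marked frame region
    y: int  # top edge
    w: int  # width
    h: int  # height

class TagStore:
    def __init__(self):
        self._tags = {}

    def store(self, tag, region):
        # "storing the first mark information in association with the first label"
        self._tags[tag] = region

    def lookup(self, tag):
        return self._tags.get(tag)

store = TagStore()
store.store("trash can", Region(40, 120, 64, 80))
print(store.lookup("trash can"))
```

Looking up a label later (e.g. when the user selects it as the target label) returns the region to hide, or `None` if no association was stored.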
In addition, the user may upload the first label as needed, and may also adjust the area of the frame covered by the first mark information.
S820, uploading the first label to a cloud server, so that the cloud server classifies the first labels uploaded by a plurality of users according to preset processing conditions.
Specifically, a user can upload and share commonly hidden labels, namely first labels, together with their content characteristic information, namely the first mark information. The cloud server can automatically process and classify them according to characteristics such as scene depth and how frequently users hide them, so that users help one another improve their photographing results. For example, suppose a landmark building at a scenic spot is under maintenance and cannot be kept out of the frame. When processing the image, the user can load labels from the cloud server, select a hidden label shared by others, and the content marked by that label at the maintenance site is automatically removed.
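The cloud-side classification can be sketched as grouping uploaded labels by the preset processing conditions the text lists (e.g. how often users hide a label, depth of field). All names and the two-bucket scheme here are assumptions for illustration only.

```python
# Hypothetical sketch of cloud-side classification of uploaded first labels.
from collections import Counter

def classify_tags(uploads, hot_threshold=2):
    """uploads: list of (label_name, depth_of_field) tuples from many users.
    Labels hidden by at least `hot_threshold` users are classed as "hot"."""
    freq = Counter(tag for tag, _ in uploads)
    hot = {t for t, n in freq.items() if n >= hot_threshold}
    return {
        "hot": sorted(hot),                    # frequently hidden labels
        "long_tail": sorted(set(freq) - hot),  # rarely hidden labels
    }

uploads = [("scaffolding", 3.5), ("car", 1.2), ("scaffolding", 4.0),
           ("trash can", 2.0), ("scaffolding", 2.8), ("car", 1.5)]
print(classify_tags(uploads))
```

A real server would likely also weigh the depth-of-field values and image feature information mentioned in the preset processing conditions; this sketch uses only hiding frequency for brevity.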
S830, receiving touch input of a user to the label source control; and responding to the touch input of the label source control, and displaying a target recommended label corresponding to a target label source associated with the touch input on a first display interface.
The user can select a target label source. If the user wants to choose a target label from labels he or she commonly uses, the target recommended labels stored in the electronic device can be selected; if the user wants to browse popular labels, the target recommended labels stored in the cloud server can be selected. Specifically, in response to a touch input of the user on the cloud control, the target recommended labels stored in the cloud server are displayed on the first display interface; in response to a touch input of the user on the local control, the target recommended labels stored in the electronic device are displayed on the first display interface.
S840, determining characteristic information of an image to be displayed; determining a target recommended label from label information pre-stored in a cloud server and/or electronic equipment according to the characteristic information; and displaying the target recommended tags on a first display interface for the user to select the target tags from the target recommended tags.
The electronic device can automatically acquire the characteristic information of the first preview screen, determine a target recommended label from the pre-stored label information according to that characteristic information, and display it in a recommendation area on the first display interface, so that the user can conveniently select a target label from the recommended ones. In this way, accurate recommendation of the target recommended label can be achieved.
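Feature-based recommendation can be sketched as matching the preview's characteristic information against pre-stored label metadata and ranking by overlap. The plain keyword-set representation of "characteristic information" is an assumption; the patent does not define it.

```python
# Minimal sketch of S840: recommend target labels by feature overlap.
def recommend_tags(preview_features, stored_tags):
    """stored_tags: {label_name: set of feature keywords it applies to}.
    Returns labels ranked by how many preview features they match."""
    scored = [(len(preview_features & feats), tag)
              for tag, feats in stored_tags.items()]
    # Highest overlap first; drop labels with no overlap at all.
    return [tag for score, tag in sorted(scored, reverse=True) if score > 0]

stored = {"car": {"street", "vehicle"},
          "trash can": {"street", "bin"},
          "scaffolding": {"building", "construction"}}
print(recommend_tags({"street", "vehicle", "people"}, stored))  # → ['car', 'trash can']
```

In the patent's setting, `stored_tags` would come from the cloud server and/or the electronic device, per the user's chosen label source.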
S850, receiving a first input of a user to the target label on the first display interface.
In response to a touch input of a user to a label button on the first display interface, a pull-down menu associated with a label is displayed on the first display interface (the pull-down menu comprises a label 1, a label 2 and the like), and a first input of the user to a target label on the first display interface is received.
S860, responding to the first input, identifying first background information in the image to be displayed and first information associated with the target label; and hiding the first information based on the first background information to obtain a first target image.
In response to the first input, the first background information in the image to be displayed, that is, the area surrounding the first information in the first preview screen, is identified. The first information is then hidden based on this surrounding area (the first background information), yielding the second preview screen. As a result, the second preview screen looks more natural, the processing traces are barely visible, and the aesthetics of the second preview screen are preserved.
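A toy version of "hiding the first information based on the first background information" fills the hidden region of a grayscale grid with the average of the pixels bordering it. A real device would use proper image inpainting; this sketch only illustrates the idea, and the function name is hypothetical.

```python
# Toy sketch of S860: fill the region to hide from its surrounding background.
def hide_region(image, box):
    """image: 2-D list of grayscale values; box: (x0, y0, x1, y1),
    inclusive-exclusive bounds of the region to hide."""
    x0, y0, x1, y1 = box
    # Collect the one-pixel ring of background just outside the box.
    border = [image[r][c]
              for r in range(len(image)) for c in range(len(image[0]))
              if not (y0 <= r < y1 and x0 <= c < x1)
              and (y0 - 1 <= r <= y1 and x0 - 1 <= c <= x1)]
    fill = sum(border) // len(border)
    out = [row[:] for row in image]  # leave the first preview untouched
    for r in range(y0, y1):
        for c in range(x0, x1):
            out[r][c] = fill
    return out

img = [[10, 10, 10, 10],
       [10, 99, 99, 10],
       [10, 99, 99, 10],
       [10, 10, 10, 10]]
print(hide_region(img, (1, 1, 3, 3)))  # the 99 block becomes background-like
```

Because the output is a copy, the original image (the first preview) remains available — which is what allows step S871/S881 to toggle between the hidden and unhidden views.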
S870, receiving a touch input to the display control by the user. After S870, S871 may also be included.
And S871, displaying a first target image, wherein the first target image is an image obtained by hiding the first information in the image to be displayed.
According to the user's habits, for the picture content (namely, the first information) that the user wants to hide, the hidden result can be viewed directly in response to the user's touch input on the display control.
S880, receiving a touch input of the user on the display control. After S880, S881 may also be included.
S881, displaying a second target image, where the second target image is an image to be displayed that does not hide the first information.
The user can autonomously choose to display the picture content (i.e., the first information) according to his or her usage habits.
To facilitate understanding of the method provided by the embodiment of the present invention, the following description, based on the above, takes the first preview screen as a preview video, using scenario two as an example.
And S910, recording starts.
S920, receiving a first input of a user to a target label on the video preview display interface.
S930, recognizing the first image frame, and determining a hidden area and a background area corresponding to the first input.
And S940, filling and repairing the hidden area based on the background area to obtain a repaired first image frame.
For example, if the user does not want a fast-moving car to disturb the shot, the user selects the "car" target label; each frame is then detected in real time during shooting, and once the corresponding object is identified, the preview interface of the electronic device automatically filters the car out.
S950, filling and repairing the hidden area of the second image frame based on the background area of the first image frame to obtain a repaired second image frame, wherein the imaging time of the first image frame is earlier than that of the second image frame.
That is, when processing the next image frame, filling automatically continues using the background of the previous frame.
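Step S950 — filling the hidden area of the current frame from the earlier, already repaired frame — can be sketched on tiny 2-D grids. This is a hedged illustration under assumed names; real video inpainting is far more involved.

```python
# Hypothetical sketch of S950: fill the hidden area of the second image frame
# from the background of the (earlier, already repaired) first image frame.
def fill_from_previous(prev_frame, cur_frame, mask):
    """mask[r][c] is True where the object to hide was detected in cur_frame."""
    return [[prev_frame[r][c] if mask[r][c] else cur_frame[r][c]
             for c in range(len(cur_frame[0]))]
            for r in range(len(cur_frame))]

prev = [[1, 1], [1, 1]]          # repaired first image frame (pure background)
cur  = [[1, 9], [9, 1]]          # second frame; the 9s are the moving car
mask = [[False, True], [True, False]]
print(fill_from_previous(prev, cur, mask))  # → [[1, 1], [1, 1]]
```

Because the first frame's imaging time is earlier, its pixels at the masked positions show the scene before the object arrived, which is exactly why the text notes that storing more earlier-frame information improves the fill quality.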
And S960, obtaining the target video according to the repaired first image frame and the repaired second image frame.
And S970, finishing recording to obtain the target video.
In this way, once the corresponding object is identified on the preview interface, it can be filtered out during previewing; and because the video retains more frame information from earlier moments, a good display effect is achieved when the region is filled according to the background information.
Fig. 10 is a schematic structural diagram of a display device according to an embodiment of the present invention.
As shown in fig. 10, the display device 100 may specifically include:
a receiving module 1001, configured to receive a first input of a target tag on a first display interface from a user.
The hiding module 1002 is configured to, in response to a first input, hide the first information in the first preview screen to obtain a second preview screen in a case where the first information associated with the target tag is identified from the first preview screen. And a display module 1003 configured to display the second preview screen.
In one possible embodiment, the hiding module 1002 includes: a first marking module.
And the first marking module is used for marking the first information associated with the target label on the first preview screen under the condition that the first information associated with the target label is identified from the first preview screen in response to the first input.
The receiving module 1001 is further configured to receive a second input from the user.
The hiding module 1002 is specifically configured to hide, in response to a second input, first information associated with a target tag in the first preview screen to obtain a second preview screen.
In a possible embodiment, the receiving module 1001 is further configured to receive a third input from the user.
Accordingly, the apparatus 100 further comprises an update module. And the updating module is used for responding to a third input and updating and displaying the second preview picture as the first preview picture.
In this case, various display requirements of the user on the object are met, and user experience can be improved.
In a possible embodiment, the receiving module 1001 is further configured to receive a fourth input of the first label on the first display interface from the user if the first display interface includes the first image.
Correspondingly, the apparatus 100 further comprises a second marking module and a first storage module.
A second labeling module to label first labeling information associated with the first label in the first image in response to a fourth input.
And the first storage module is used for storing the first mark information and the first label in a correlation mode.
Here, by presetting the first tag information and the first label having the association relationship, a user can conveniently and quickly process a target object, and the richness of the first label can be improved.
In a possible embodiment, the receiving module 1001 is further configured to: in a case where the first display interface includes the first image, a fifth input is received that the user selected the first label information from the first image.
The display module 1003 is further configured to display an editable label associated with the first mark information in response to the fifth input.
The receiving module 1001 is further configured to receive a first tag input by a user in an editable tag.
The apparatus 100 further comprises a second storage module for storing the first tag information in association with the first tag.
Therefore, the accuracy of the picture content represented by the first label can be improved, and a user can quickly think of the corresponding picture when seeing the label.
In a possible embodiment, the apparatus 100 further comprises: the uploading module is used for uploading the first label to the cloud server, so that the cloud server classifies the first labels uploaded by a plurality of users according to preset processing conditions; the preset processing conditions include at least one of the following: the depth of field of the first image, the calling frequency of the first label, and the feature information of the first image.
Here, the user has more labels to choose from, and the labels offered for selection better match the user's operating habits.
In a possible embodiment, the apparatus 100 further comprises a determining module for determining the feature information of the first preview screen.
The determining module is further used for determining the target recommended label from label information pre-stored in the cloud server and/or the device according to the characteristic information.
Accordingly, the display module 1003 is configured to display the target recommendation tag.
The target recommendation label is determined according to the feature information of the first preview picture, so that accurate recommendation of the target recommendation label can be achieved, and user experience is improved.
In summary, the display device in the embodiment of the present invention hides the first information (e.g., privacy information) associated with the target tag in the first preview screen in response to the user's input on the target tag, and displays the resulting second preview screen. The user can thus conveniently and quickly hide the desired area of the first preview screen without affecting the overall aesthetics of the second preview screen, achieving a better display effect and simplifying post-processing.
Fig. 11 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention.
The terminal device 1100 includes, but is not limited to: radio frequency unit 1101, network module 1102, audio output unit 1103, input unit 1104, sensor 1105, display unit 1106, user input unit 1107, interface unit 1108, memory 1109, processor 1110, and power supply 1111. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 11 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The user input unit 1107 is configured to receive a first input of the target tag from the user.
A processor 1110, configured to hide the first information in the first preview screen to obtain a second preview screen in response to the first input in a case where the first information associated with the target tag is identified from the first preview screen.
The display unit 1106 is configured to display the second preview screen. That is, a first input of the user on a target label on a first display interface is received, wherein the first display interface includes a first preview screen; in response to the first input, first information associated with the target label in the first preview screen is hidden to obtain a second preview screen; and the second preview screen is displayed.
According to the embodiment of the present invention, by hiding first information (such as privacy information) associated with the target tag in the first preview screen in response to the user's input on the target tag, the second preview screen with the first information hidden is displayed. The user can conveniently and quickly hide the desired area of the first preview screen without affecting the overall aesthetics of the second preview screen, achieving a better display effect and simplifying post-processing.
In the embodiment of the present invention, the radio frequency unit 1101 may be configured to receive and transmit signals during message transmission or communication; specifically, it receives downlink resources from a base station and forwards them to the processor 1110 for processing, and transmits uplink resources to the base station. In general, the radio frequency unit 1101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 1101 may also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 1102, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 1103 may convert an audio resource received by the radio frequency unit 1101 or the network module 1102 or stored in the memory 1109 into an audio signal and output as sound. Also, the audio output unit 1103 can also provide audio output related to a specific function performed by the terminal device 1100 (e.g., a call signal reception sound, a message reception sound, and the like). The audio output unit 1103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1104 is used to receive audio or video signals. The input unit 1104 may include a Graphics Processing Unit (GPU) 11041 and a microphone 11042. The graphics processor 11041 processes image resources of still pictures or video obtained by an image capturing apparatus (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frame may be displayed on the display unit 1106. The image frames processed by the graphics processor 11041 may be stored in the memory 1109 (or other storage medium) or transmitted via the radio frequency unit 1101 or the network module 1102. The microphone 11042 may receive sound and process it into an audio resource. In the phone call mode, the processed audio resource may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 1101.
Terminal device 1100 also includes at least one sensor 1105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 11061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 11061 and/or the backlight when the terminal device 1100 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 1105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., and will not be described in detail herein.
The display unit 1106 is used to display information input by a user or information provided to the user. The Display unit 1106 may include a Display panel 11061, and the Display panel 11061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-emitting diode (OLED), or the like.
The user input unit 1107 is operable to receive input numeric or character information and generate key signal inputs relating to user settings and function control of the terminal device. Specifically, the user input unit 1107 includes a touch panel 11071 and other input devices 11072. The touch panel 11071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 11071 (e.g., operations by a user on or near the touch panel 11071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 11071 may include two portions of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, and sends the touch point coordinates to the processor 1110, and receives and executes commands sent from the processor 1110. In addition, the touch panel 11071 may be implemented by various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The user input unit 1107 may include other input devices 11072 in addition to the touch panel 11071. In particular, the other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 11071 can be overlaid on the display panel 11061, and when the touch panel 11071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 1110 to determine the type of the touch event, and then the processor 1110 provides a corresponding visual output on the display panel 11061 according to the type of the touch event. Although the touch panel 11071 and the display panel 11061 are shown in fig. 11 as two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 11071 and the display panel 11061 may be integrated to implement the input and output functions of the terminal device, and is not limited herein.
The interface unit 1108 is an interface for connecting an external device to the terminal apparatus 1100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless resource port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. Interface unit 1108 may be used to receive input (e.g., resource information, power, etc.) from external devices and transmit the received input to one or more elements within terminal apparatus 1100 or may be used to transmit resources between terminal apparatus 1100 and external devices.
The memory 1109 may be used to store software programs and various resources. The memory 1109 may mainly include a storage program area and a storage resource area, where the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like; the storage resource area may store resources (such as audio resources, a phonebook, etc.) created according to the use of the cellular phone, and the like. In addition, the memory 1109 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 1110 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions and processing resources of the terminal device by running or executing software programs and/or modules stored in the memory 1109 and calling resources stored in the memory 1109, thereby integrally monitoring the terminal device. Processor 1110 may include one or more processing units; preferably, the processor 1110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1110.
Terminal device 1100 can also include a power supply 1111 (e.g., a battery) for providing power to various components, and preferably, power supply 1111 can be logically coupled to processor 1110 via a power management system such that functions such as managing charging, discharging, and power consumption are performed via the power management system.
In addition, the terminal device 1100 includes some functional modules that are not shown, and are not described in detail herein.
Embodiments of the present invention also provide a computer-readable storage medium on which a computer program is stored, which, when executed in a computer, causes the computer to perform the steps of the display method of the embodiments of the present invention.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (16)

1. A display method applied to electronic equipment is characterized by comprising the following steps:
receiving a first input of a target label from a user;
in response to the first input, in the case that first information associated with the target tag is identified from a first preview screen, hiding the first information in the first preview screen, and obtaining a second preview screen;
and displaying the second preview picture.
2. The method of claim 1, wherein hiding the first information in the first preview screen if the first information associated with the target tag is identified from the first preview screen in response to the first input, resulting in a second preview screen, comprises:
in response to the first input, in a case where first information associated with the target tag is recognized from a first preview screen, marking the first information associated with the target tag on the first preview screen;
receiving a second input of the user;
and in response to the second input, hiding first information associated with the target label in the first preview screen to obtain a second preview screen.
3. The method of claim 1, wherein after said displaying the second preview screen, the method further comprises:
receiving a third input by the user;
and responding to the third input, and updating and displaying the second preview screen as the first preview screen.
4. The method of claim 1, wherein prior to said receiving a first user input of a target tag, the method further comprises:
receiving a fourth input of the first label on the first display interface by the user under the condition that the first display interface comprises the first image;
marking first marking information associated with the first label in the first image in response to the fourth input;
storing the first tag information in association with the first tag.
5. The method of claim 1, wherein prior to said receiving a first user input of a target tag, the method further comprises:
receiving a fifth input that the user selects first marker information from the first image in the case that the first display interface comprises the first image;
in response to the fifth input, displaying an editable label associated with the first label information;
receiving a first label input by the user on the editable label;
storing the first tag information in association with the first tag.
6. The method of claim 4 or 5, wherein after said storing said first tag information in association with said first tag, said method further comprises:
uploading the first tags to a cloud server, so that the cloud server can classify the first tags uploaded by a plurality of users according to preset processing conditions; the preset processing condition comprises at least one of the following conditions:
the depth of field of the first image, the calling frequency of the first tag, and the feature information of the first image.
7. The method of claim 1, wherein prior to said receiving a first user input of a target tag, the method further comprises:
determining characteristic information of the first preview picture;
determining a target recommended label from label information pre-stored in a cloud server and/or electronic equipment according to the characteristic information;
and displaying the target recommendation label.
8. A display device, comprising:
the receiving module is used for receiving a first input of a user to the target label;
a hiding module, configured to, in response to the first input, hide first information associated with the target tag from a first preview screen to obtain a second preview screen when the first information is identified from the first preview screen;
and the display module is used for displaying the second preview picture.
9. The apparatus of claim 8, wherein the concealment module comprises: a first marking module;
the first marking module is used for marking the first information associated with the target label on a first preview screen when the first information associated with the target label is identified from the first preview screen in response to the first input;
the receiving module is further used for receiving a second input of the user;
the hiding module is specifically configured to hide, in response to the second input, first information associated with the target tag in the first preview screen to obtain the second preview screen.
10. The apparatus of claim 8, wherein the receiving module is further configured to receive a third input from the user;
the apparatus also includes an update module;
and the updating module is used for responding to the third input and updating and displaying the second preview picture as the first preview picture.
11. The apparatus of claim 8, wherein the receiving module is further configured to receive a fourth input from the user to the first tab on the first display interface if the first display interface includes the first image;
the device also comprises a second marking module and a first storage module;
the second marking module is configured to mark first marking information associated with the first label in the first image in response to the fourth input;
the first storage module is used for storing the first mark information and the first label in a correlation mode.
12. The apparatus of claim 8, wherein the receiving module is further configured to, in a case where the first display interface includes a first image, receive a fifth input in which the user selects first tag information from the first image;
the display module is further configured to display an editable tag associated with the first tag information in response to the fifth input;
the receiving module is further configured to receive a first tag input by the user on the editable tag;
the apparatus further comprises a second storage module configured to store the first tag information in association with the first tag.
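Claims 11 and 12 both end in storing tag information (e.g. a marked image region) in association with a tag. A minimal dictionary-based association store; all names here (`tag_store`, `store_association`, `lookup`) are hypothetical:

```python
# Hypothetical association store for claims 11-12: tag -> stored tag information.
tag_store = {}

def store_association(tag, tag_info):
    """Store tag information, such as a marked image region, under a tag."""
    tag_store.setdefault(tag, []).append(tag_info)

def lookup(tag):
    """Retrieve all tag information previously stored under a tag."""
    return tag_store.get(tag, [])

store_association("landmark", {"image": "first_image", "region": (10, 20, 64, 64)})
```

A one-to-many mapping is used because nothing in the claims limits a tag to a single piece of stored information.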
13. The apparatus according to claim 11 or 12, further comprising an upload module configured to upload the first tag to a cloud server, so that the cloud server classifies the first tags uploaded by multiple users according to a preset processing condition;
wherein the preset processing condition comprises at least one of: a depth of field of the first image, a calling frequency of the first tag, and feature information of the first image.
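As one reading of the server-side classification in claim 13, tags uploaded by many users could be grouped by calling frequency, one of the listed preset processing conditions. The function name and the frequency threshold below are assumptions chosen for illustration:

```python
from collections import Counter

def classify_by_frequency(uploaded_tags, min_calls=2):
    """Split uploaded tags into frequently and rarely called groups.
    The threshold `min_calls` stands in for a preset processing
    condition and is a hypothetical choice."""
    counts = Counter(uploaded_tags)
    popular = sorted(t for t, n in counts.items() if n >= min_calls)
    rare = sorted(t for t, n in counts.items() if n < min_calls)
    return popular, rare

popular, rare = classify_by_frequency(["cat", "cat", "dog", "cat", "sky"])
```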
14. The apparatus according to claim 8, further comprising a determining module configured to determine feature information of the first preview screen;
the determining module is further configured to determine, according to the feature information, a target recommended tag from tag information pre-stored in a cloud server and/or in the apparatus;
the display module is further configured to display the target recommended tag.
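Claim 14 recommends a tag by matching the preview's feature information against pre-stored tag information. One simple way to sketch such matching is feature-set overlap; the matching rule and all names here are illustrative assumptions, not the claimed method:

```python
def recommend_tag(preview_features, stored_tags):
    """Return the stored tag whose feature set overlaps the preview's most.
    `stored_tags` maps tag -> feature set, standing in for the tag
    information pre-stored on the cloud server and/or the device."""
    if not stored_tags:
        return None
    best = max(stored_tags, key=lambda t: len(preview_features & stored_tags[t]))
    # No recommendation if nothing overlaps at all.
    return best if preview_features & stored_tags[best] else None

stored = {"beach": {"sand", "sea", "sky"}, "forest": {"tree", "leaf"}}
tag = recommend_tag({"sea", "sky", "cloud"}, stored)
```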
15. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the display method according to any one of claims 1 to 7.
16. A computer-readable storage medium, on which a program or instructions are stored, wherein the program or instructions, when executed by a processor, implement the steps of the display method according to any one of claims 1 to 7.
CN202010469598.4A 2020-05-28 2020-05-28 Display method and device and electronic equipment Pending CN111752450A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010469598.4A CN111752450A (en) 2020-05-28 2020-05-28 Display method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010469598.4A CN111752450A (en) 2020-05-28 2020-05-28 Display method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111752450A true CN111752450A (en) 2020-10-09

Family

ID=72673689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010469598.4A Pending CN111752450A (en) 2020-05-28 2020-05-28 Display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111752450A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104282031A (en) * 2014-09-19 2015-01-14 广州三星通信技术研究有限公司 Method and device for processing picture to be output and terminal
CN104657991A (en) * 2015-02-06 2015-05-27 深圳市金立通信设备有限公司 Picture processing method
CN105827952A (en) * 2016-02-01 2016-08-03 维沃移动通信有限公司 Photographing method for removing specified object and mobile terminal
WO2018023212A1 (en) * 2016-07-30 2018-02-08 华为技术有限公司 Image recognition method and terminal
CN107820013A (en) * 2017-11-24 2018-03-20 上海创功通讯技术有限公司 A kind of photographic method and terminal
CN108924418A (en) * 2018-07-02 2018-11-30 珠海市魅族科技有限公司 A kind for the treatment of method and apparatus of preview image, terminal, readable storage medium storing program for executing
CN109492635A (en) * 2018-09-20 2019-03-19 第四范式(北京)技术有限公司 Obtain method, apparatus, equipment and the storage medium of labeled data
WO2019146942A1 (en) * 2018-01-26 2019-08-01 삼성전자주식회사 Electronic apparatus and control method thereof
CN111191606A (en) * 2019-12-31 2020-05-22 Oppo广东移动通信有限公司 Image processing method and related product


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ORIGINOS: "vivo AI cutout" (vivo AI抠图), 《HTTPS://V.QQ.COM/X/PAGE/B09594HCJV7.HTML》 *
啾啾啾的皮卡车: "The built-in portrait cutout feature on vivo phones: one-tap portrait extraction and background replacement, so you can make your own ID photos!" (vivo手机自带的人像抠图功能，一键抠人像换背景，从此证件照也可以自己搞定！), 《HTTPS://WWW.BILIBILI.COM/VIDEO/AV967989733/》 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112528326A (en) * 2020-12-09 2021-03-19 维沃移动通信有限公司 Information processing method and device and electronic equipment
CN112528326B (en) * 2020-12-09 2024-01-02 维沃移动通信有限公司 Information processing method and device and electronic equipment
CN112995506A (en) * 2021-02-09 2021-06-18 维沃移动通信(杭州)有限公司 Display control method, display control device, electronic device, and medium
WO2022171057A1 (en) * 2021-02-09 2022-08-18 维沃移动通信(杭州)有限公司 Display control method and apparatus, and electronic device and medium
CN112995506B (en) * 2021-02-09 2023-02-07 维沃移动通信(杭州)有限公司 Display control method, display control device, electronic device, and medium
CN113067983A (en) * 2021-03-29 2021-07-02 维沃移动通信(杭州)有限公司 Video processing method and device, electronic equipment and storage medium
CN113806002A (en) * 2021-09-24 2021-12-17 维沃移动通信有限公司 Image display method and device

Similar Documents

Publication Publication Date Title
CN109361865B (en) Shooting method and terminal
CN107817939B (en) Image processing method and mobile terminal
CN108495029B (en) Photographing method and mobile terminal
CN110365907B (en) Photographing method and device and electronic equipment
CN111010610B (en) Video screenshot method and electronic equipment
CN109005286B (en) Display control method and folding screen terminal
CN110933306A (en) Method for sharing shooting parameters and electronic equipment
CN108174103B (en) Shooting prompting method and mobile terminal
CN111752450A (en) Display method and device and electronic equipment
CN110602565A (en) Image processing method and electronic equipment
CN110970003A (en) Screen brightness adjusting method and device, electronic equipment and storage medium
CN108848313B (en) Multi-person photographing method, terminal and storage medium
CN108459788B (en) Picture display method and terminal
CN109495616B (en) Photographing method and terminal equipment
CN108174110B (en) Photographing method and flexible screen terminal
CN110650367A (en) Video processing method, electronic device, and medium
CN109246351B (en) Composition method and terminal equipment
CN108174109B (en) Photographing method and mobile terminal
CN110798621A (en) Image processing method and electronic equipment
CN111246102A (en) Shooting method, shooting device, electronic equipment and storage medium
CN110913261A (en) Multimedia file generation method and electronic equipment
CN111182211B (en) Shooting method, image processing method and electronic equipment
CN111125800B (en) Icon display method and electronic equipment
CN110086998B (en) Shooting method and terminal
CN109639981B (en) Image shooting method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201009