CN112492206B - Image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN112492206B
Authority
CN
China
Prior art keywords
image
input
target
area
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011372114.0A
Other languages
Chinese (zh)
Other versions
CN112492206A (en)
Inventor
李亚韦
黄鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202011372114.0A priority Critical patent/CN112492206B/en
Publication of CN112492206A publication Critical patent/CN112492206A/en
Application granted granted Critical
Publication of CN112492206B publication Critical patent/CN112492206B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an image processing method, an image processing apparatus, and an electronic device. The method includes: displaying a plurality of label information in a first preset area of a shooting preview interface; receiving a first input and determining target label information according to the first input; and receiving a second input, acquiring a first image, and generating a first template according to the target label information, where a first image filling area of the first template is filled with the first image. The first template has a plurality of image filling areas, each corresponding to one of the plurality of label information; the plurality of image filling areas includes the first image filling area, which corresponds to the target label information. The embodiments of the application can improve the efficiency of acquiring images when a user needs to use multiple images.

Description

Image processing method and device and electronic equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
With the popularization of electronic devices, their functions have become increasingly comprehensive. An electronic device provides folders in which a user can store photographed images. In the prior art, when a user needs to use multiple images, for example to publish them to a social networking site, the user must open a folder, browse the images in it, and select images one by one, which is inefficient.
Disclosure of Invention
The embodiments of the present application provide an image processing method, an image processing apparatus, and an electronic device, which can solve the prior-art problem of low image acquisition efficiency when a user needs to use multiple images.
In order to solve the above technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, where the method includes:
displaying a plurality of label information in a first preset area in a shooting preview interface;
receiving a first input, and determining target label information according to the first input;
receiving a second input, acquiring a first image, and generating a first template according to the target label information, wherein a first image filling area of the first template is filled with the first image;
wherein the first template has a plurality of image filling areas, each of the plurality of image filling areas corresponds to one of the plurality of label information, the plurality of image filling areas includes the first image filling area, and the first image filling area corresponds to the target label information.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the first display module is used for displaying a plurality of label information in a first preset area in a shooting preview interface;
the first determining module is used for receiving a first input and determining target label information according to the first input;
the generating module is used for receiving a second input, acquiring a first image, and generating a first template according to the target label information, wherein a first image filling area of the first template is filled with the first image;
wherein the first template has a plurality of image filling areas, each of the plurality of image filling areas corresponds to one of the plurality of label information, the plurality of image filling areas includes the first image filling area, and the first image filling area corresponds to the target label information.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the image processing method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps in the image processing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, a plurality of label information is displayed in a first preset area of the shooting preview interface; a first input is received, and target label information is determined according to it; a second input is received, a first image is acquired, and a first template is generated according to the target label information, with the first image filling area of the first template filled with the first image. In this way, the label information corresponding to each image is determined during photographing, images are filled into the image filling areas according to their label information, and a first template filled with images is generated automatically, so the user can quickly obtain the desired images through the first template, with high efficiency.
Drawings
Fig. 1 is a flowchart of an image processing method provided in an embodiment of the present application;
Fig. 2 is one of the schematic interface display diagrams of an electronic device according to an embodiment of the present application;
Fig. 3 is one of the schematic diagrams of a first template provided by an embodiment of the present application;
Fig. 4 is a second schematic diagram of a first template provided by an embodiment of the present application;
Fig. 5 is a third schematic diagram of a first template provided by an embodiment of the present application;
Fig. 6 is a fourth schematic diagram of a first template provided by an embodiment of the present application;
Fig. 7 is a fifth schematic diagram of a first template provided by an embodiment of the present application;
Fig. 8 is a second schematic interface display diagram of an electronic device according to an embodiment of the present application;
Fig. 9 is one of the schematic structural diagrams of an image processing apparatus according to an embodiment of the present application;
Fig. 10 is a second schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 11 is a third schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 12 is a fourth schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 13 is one of the schematic structural diagrams of an electronic device according to an embodiment of the present application;
Fig. 14 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be appreciated that terms so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. The terms "first", "second", and the like are generally used in a generic sense and do not limit the number of objects; for example, a first object can be one object or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates that the related objects before and after it are in an "or" relationship.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, fig. 1 is a flowchart of an image processing method provided in an embodiment of the present application, and as shown in fig. 1, the method includes the following steps:
step 101, displaying a plurality of label information in a first preset area in a shooting preview interface.
The shooting preview interface may be an interface for displaying a shooting preview image; for example, it may be the interface entered after tapping a shooting button. The first preset area may be any area in the shooting preview interface. The plurality of label information may include feature information such as scenery, handsome men, sand sculpture, animals, cuteness, and beautiful women; alternatively, the label information may include time information such as January 5, January 6, and January 7; alternatively, the label information may include personal name information. The label information is not limited in this embodiment. As shown in fig. 2, label information may be displayed in a preview area of the shooting preview interface: animals, scenery, sand sculpture, and cuteness.
In addition, a classified photographing button can be added to the shooting preview interface, through which the classified photographing mode can be turned on or off; for example, tapping the button turns the mode on, and tapping it again turns it off. When the classified photographing mode is on, the plurality of label information can be displayed in the first preset area; when it is off, the display of the label information in the first preset area is cancelled. The shooting preview interface can also display shooting function buttons such as normal shooting, portrait, night scene, large aperture, AR shooting, panorama, and slow motion. As shown in fig. 2, a category modification button "+ category" can be displayed on the shooting preview interface. After it is tapped, a delete button can be displayed at the position of each piece of label information for deleting that label information, and a label information addition box can be displayed, into which label information can be entered by an input method or by voice.
Step 102, receiving a first input, and determining target label information according to the first input.
The target label information may be one of the plurality of label information. The first input may be an input for selecting the target label information from the plurality of label information; for example, it may be an operation of tapping the target label information, double-tapping it, or sliding on it according to a preset gesture. The specific form of the first input is not limited in this embodiment.
Step 103, receiving a second input, acquiring a first image, and generating a first template according to the target label information, wherein a first image filling area of the first template is filled with the first image;
wherein the first template has a plurality of image filling areas, each of the plurality of image filling areas corresponds to one of the plurality of label information, the plurality of image filling areas includes the first image filling area, and the first image filling area corresponds to the target label information.
In addition, the second input may be an input for taking a picture, for example an operation of tapping a photographing button. The captured first image may be associated with the target label information and used to fill the first image filling area, generating the first template. The first template may be a template with a regular shape, for example a grid-style template, or a template with an irregular shape; this embodiment does not limit the shape. An image filling area may be grid-shaped, polygonal, circular, or irregular, which is likewise not limited. For example, the first template may be a nine-grid, a six-grid, a three-grid, and so on. The template style of the first template may be preset before the electronic device leaves the factory, or set manually by the user afterwards.
Optionally, as shown in fig. 3, the first template includes nine grids, seven of which are image filling areas; the blank areas are grids without label information. As shown in fig. 4, the first template includes seven grids, five of which are image filling areas. As shown in fig. 5, the first template includes six grids, five of which are image filling areas.
It should be noted that all the image filling areas in the first template other than the first image filling area may also be filled with images: stored images may be filled into them, with each image filling area receiving an image whose label information corresponds to that area, to generate the first template. The electronic device may store a plurality of images, each corresponding to one piece of label information. A stored image may be a photographed image or an image received through an application program. The label information may have been set for an image at the time of photographing, after photographing, when the image was received through an application, and so on.
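The filling logic described above can be sketched in Python. This is a minimal sketch of one possible implementation, not the claimed method itself; all class and function names are illustrative and not taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StoredImage:
    path: str
    label: str          # one piece of label information per image

@dataclass
class FillArea:
    label: str                          # label information of this area
    image: Optional[StoredImage] = None

def fill_template(areas: List[FillArea], library: List[StoredImage],
                  first_image: StoredImage, target_label: str) -> List[FillArea]:
    """Fill each image filling area with an image whose label information
    matches the area; the newly captured first image fills the area that
    corresponds to the target label information."""
    for area in areas:
        if area.label == target_label:
            area.image = first_image    # the first image filling area
        else:
            # any stored image carrying the same label information
            area.image = next((img for img in library
                               if img.label == area.label), None)
    return areas
```

An area whose label has no matching stored image simply keeps `image = None`, mirroring the blank grids mentioned later in the description.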
In addition, the label information may be used to classify images. Classification may be by image features; for example, the label information may include scenery, handsome men, sand sculpture, animals, cuteness, and beautiful women. Alternatively, classification may be by shooting time; for example, the label information may include January 5, January 6, January 7, and so on. Alternatively, classification may be by name, for example by the name of a person in the image.
Further, images may be filled into the image filling areas according to the label information, where the label information corresponding to an image filling area is the same as the label information of the image filled into it. As one embodiment, the first template may be a template in a long-strip grid style. As shown in fig. 6, there is only one grid per row, and the grids are arranged in groups of two: one grid of each group holds an image, and the other holds a text description. The number of grid groups can be set arbitrarily. In the long-strip grid mode, the image shooting date may be chosen as the classification basis; for example, the label information of the first grid may be January 5, that of the second grid January 6, and that of the third grid January 7, and images are automatically filled into the image filling areas according to their shooting dates.
As another embodiment, the first template may be a template in a four-grid style. As shown in fig. 7, the label information corresponding to each grid may be a person's name, and images are filled into the grids according to those names. In practice, an album on the electronic device stores photos of many people, and label information, which may be a person's name, can be set for each photo. When the user wants to publish images to a social networking site, the user can select a template style whose label information consists of person names, so that the photos are automatically filled in to generate the first template, which can then be published to the social networking site. Furthermore, hidden comment text can be added to the image in each grid; when the image is tapped, the corresponding hidden comment text is displayed.
It should be noted that an image filling area is filled with an image having the same label information as the area. If there are multiple such images, one may be selected at random, or the one with the most recent photographing time may be chosen; if there is no image with the same label information, the area may be left blank. The generated first template may be used to post to a social networking site, or stored to be called the next time it is used. When the first template is displayed, the image in each filling area may be shown as a thumbnail; the user can double-tap an image filling area to display its image at normal size, which is convenient for browsing.
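The selection rule just described (random choice, most recent photographing time, or a blank area when nothing matches) can be sketched as follows; the function name and the `(path, timestamp)` tuple representation are assumptions made for illustration.

```python
import random

def pick_image(candidates, policy="most_recent"):
    """Select one image for an image filling area.

    candidates: list of (path, capture_timestamp) tuples that share the
    area's label information. Returns None when the list is empty (the
    area stays blank); otherwise picks either a random candidate or the
    most recently captured one, the two strategies the description names.
    """
    if not candidates:
        return None                       # no matching image: leave blank
    if policy == "random":
        return random.choice(candidates)
    # "closest photographing time": the newest capture timestamp wins
    return max(candidates, key=lambda c: c[1])
```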
In practice, taking a grid-style first template as an example, the user can edit a grid and add label information to it as a background. When a grid is tapped, its label information is displayed and then hidden after a preset duration, which may be 0.5 s, 1 s, 2 s, and so on. Each grid may display one piece of label information; alternatively, for grids sharing the same label information, that information may be displayed across the grids.
It should be noted that when the user is not satisfied with the images in the generated first template, the user may modify them. As one embodiment, the first template is filled with a third image, and the image processing method may further include: receiving an eighth input on the third image in the first template, and in response to the eighth input, replacing the images in the first template other than the third image, where each replacement image has the same label information as the image it replaces. The eighth input may be an input selecting the third image, and the third image may be one or more images. As another embodiment, long-pressing an image filling area in the first template may display its corresponding label information, which can then be edited to change the classification of that area.
For example, if the user is satisfied only with the third image, the user may double-tap the third image to lock it and then tap the refresh button, so that all images except the third image are replaced with one key. If the user is not satisfied with any image in the first template, tapping the refresh button replaces all of them, with each replacement image having the same label information as the image it replaces.
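The lock-and-refresh behavior described above can be sketched like this; the dictionary-based area representation, the `(path, label)` library format, and all names are hypothetical, not part of the patent.

```python
def refresh_template(areas, library, locked_paths):
    """One-key refresh: replace every filled image except the locked
    ones with a different stored image carrying the same label
    information. Areas with no alternative are left as they are."""
    for area in areas:
        current = area.get("image")
        if current is None or current in locked_paths:
            continue                      # blank or user-locked: keep
        alternatives = [path for path, label in library
                        if label == area["label"] and path != current]
        if alternatives:
            area["image"] = alternatives[0]
    return areas
```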
Displaying label information on the shooting preview interface lets the user select label information while photographing, so photographed images are classified flexibly at capture time. Filling each photographed image into the corresponding image filling area according to the label information set during photographing generates the first template, which makes it convenient for the user to view the images and to publish the first template to a social networking site quickly, according to the user's own needs.
In the embodiments of the application, a plurality of label information is displayed in a first preset area of the shooting preview interface; a first input is received, and target label information is determined according to it; a second input is received, a first image is acquired, and a first template is generated according to the target label information, with the first image filling area of the first template filled with the first image. In this way, the label information corresponding to each image is determined during photographing, images are filled into the image filling areas according to their label information, and a first template filled with images is generated automatically, so the user can quickly obtain the desired images through the first template, with high efficiency.
Optionally, before the acquiring the first image, the method further includes:
displaying a plurality of priority information in a second preset area in the shooting preview interface;
receiving a third input, and determining target priority information according to the third input;
the generating a first template according to the target label information includes:
generating a first template according to the target label information and the target priority information;
wherein each of the image fill areas corresponds to one of the plurality of priority information, and the first image fill area corresponds to the target priority information.
The second preset area may be any area in the shooting preview interface. The acquired first image may be associated with the target priority information. The third input may be an input for selecting the target priority information from the plurality of priority information; for example, it may be an operation of tapping the target priority information, double-tapping it, or sliding on it according to a preset gesture. The specific form of the third input is not limited in this embodiment. The plurality of priority information may include a first priority, a second priority, a third priority, and so on; or high, medium, and low priorities, and so on. The specific form of the priority information is likewise not limited in this embodiment.
In practice, as shown in fig. 2, a high button, a medium button, and a low button may be displayed on the shooting preview interface for setting the priority of the photographed image; after one of them is selected, the selected button is highlighted.
It should be noted that the first template may be generated according to the priority information and the label information at the same time: an image filling area is filled with an image having both the same label information and the same priority information as the area, generating the first template. If there are multiple such images, one may be selected at random, or the one with the most recent photographing time may be chosen; if there is no image with both the same label information and the same priority information, the area may be left blank.
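A minimal sketch of matching on both label information and priority information, under the assumption that each stored image is represented as a dictionary with `label` and `priority` fields; the field and function names are illustrative only.

```python
def find_match(library, label, priority):
    """Return the first stored image whose label information and
    priority information both match the image filling area; returning
    None leaves the area blank, as in the description above."""
    for entry in library:
        if entry["label"] == label and entry["priority"] == priority:
            return entry
    return None
```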
In this embodiment, images are filled into the image filling areas according to both label information and priority information to generate the first template, so images can be called automatically according to their classification and the user's degree of preference for them. This makes it convenient for the user to view the images, and the generated first template can be published to a social networking site, which is engaging and gives a good user experience.
Optionally, before the acquiring the first image, the method further includes:
in the shooting preview interface, displaying a frame selection area in an image preview area;
receiving a fourth input, and adjusting the image framed in the frame selection area according to the fourth input;
after the acquiring the first image, the method further comprises:
in the shooting preview interface, displaying a thumbnail of the first image in a third preset area, where the thumbnail of the first image is the image framed in the frame selection area.
The fourth input may be used to adjust the position and/or size of the frame selection area. For example, the fourth input may be an operation of moving the frame selection area, enlarging it, or reducing it; the fourth input is not limited in this embodiment.
Note that the frame selection area may be displayed, as shown in fig. 8, when the classified photographing mode is turned on. The size of the frame selection area can be adjusted, and its position can be moved within the preview area of the shooting preview interface, thereby adjusting the image framed by the frame selection area.
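Cropping the frame-selected region out of the preview image to produce the thumbnail can be sketched as follows, representing the preview as a plain 2-D list of pixel values; the `(left, top, width, height)` box format is an assumption for illustration.

```python
def crop_thumbnail(pixels, box):
    """Crop the user-adjusted frame selection area out of a preview image.

    pixels: 2-D list of pixel values (rows of columns); box is
    (left, top, width, height) for the frame selection area. The
    returned sub-image is what would be shown as the thumbnail in
    the third preset area.
    """
    left, top, width, height = box
    return [row[left:left + width] for row in pixels[top:top + height]]
```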
In this embodiment, the image framed by the frame selection area is adjusted according to the fourth input, and a thumbnail of the first image, namely the image framed by the frame selection area, is displayed in a third preset area of the shooting preview interface. The user can thus frame a partial area of the shooting preview image to be displayed as the thumbnail, so that the key content of the acquired image can be recognized from the thumbnail.
Optionally, after generating the first template according to the target tag information, the method further includes:
receiving a fifth input to the first image in the first image fill area;
in response to the fifth input, displaying at least one second image, wherein the label information corresponding to the second image is the same as the label information corresponding to the first image;
receiving a sixth input to a target image of the at least one second image;
in response to the sixth input, replacing the first image in the first image fill area with the target image.
The fifth input may be an operation of tapping the first image, double-tapping it, or sliding on it according to a preset gesture, and so on; its specific form is not limited in this embodiment. The sixth input may be an input for selecting the target image from the at least one second image; for example, it may be an operation of tapping the target image, double-tapping it, or sliding on it according to a preset gesture, and its specific form is likewise not limited. Taking a nine-grid first template as an example, if the user is not satisfied with the image in a certain grid, the user can tap that grid to select an image manually.
In this embodiment, at least one second image is displayed, and the first image in the first image filling area is replaced with the target image. When an image in the first template needs to be replaced, images of the same category are offered for the user to choose from, and the replacement is based on the user's selection, which is highly flexible.
Optionally, the generating a first template according to the target tag information includes:
receiving a seventh input;
and responding to the seventh input, identifying a target scene corresponding to the seventh input, and generating a first template corresponding to the target scene according to the target label information.
Wherein the seventh input may be an operation of entering an album folder from a social application, clicking a preset button within a social application, sliding within a social application according to a preset gesture, or opening an album folder directly; the seventh input is not limited in this embodiment. Different template styles can be set for different scenes, so the generated first template differs by scene. For example, the first template generated when the album folder is opened directly may differ from the one generated when the album folder is entered from a social application.
In addition, identifying the target scene corresponding to the seventh input may include identifying the target social application corresponding to the seventh input, with the first template corresponding to that application. Different template styles can be set for different social applications, so a seventh input directed at a different social application yields a different first template.
In this embodiment, the target scene corresponding to the seventh input is identified, and the first template corresponding to that scene is generated according to the target tag information. When publishing images to a social networking site, the images are thus automatically filled into the template style the user has preset for that site, the first template is generated, and the images can be published quickly.
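The scene-dependent style selection described above could be realized with a simple preset table. The scene names and style identifiers below are invented for illustration only:

```python
# hypothetical scene-to-style table; scene names and styles are assumptions
STYLE_BY_SCENE = {
    "album": "grid_3x3",
    "social_app_a": "banner",
    "social_app_b": "collage",
}

def template_style(target_scene, default="grid_3x3"):
    """Return the preset template style for the scene that triggered template
    generation, so that entering from a social application and opening the
    album directly produce different first templates."""
    return STYLE_BY_SCENE.get(target_scene, default)
```

A seventh input originating from "social_app_a" would then yield the "banner" style, while one from the album would yield the default grid.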
It should be noted that the execution subject of the image processing method provided in the embodiments of the present application may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiments of the present application is described by taking an image processing apparatus executing the image processing method as an example.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application, and as shown in fig. 9, the apparatus 200 includes:
the first display module 201 is configured to display a plurality of label information in a first preset area in a shooting preview interface;
a first determining module 202, configured to receive a first input, and determine target tag information according to the first input;
the generating module 203 is configured to receive a second input, acquire a first image, and generate a first template according to the target tag information, where a first image filling area of the first template is filled with the first image;
wherein the first template has a plurality of image filling areas, each of the plurality of image filling areas corresponds to one of the plurality of label information, the plurality of image filling areas includes the first image filling area, and the first image filling area corresponds to the target label information.
In the embodiment of the application, the first display module displays a plurality of label information in a first preset area of the shooting preview interface; the first determining module receives a first input and determines target label information according to the first input; the generating module receives a second input, acquires a first image, and generates a first template according to the target label information, where a first image filling area of the first template is filled with the first image. The first template has a plurality of image filling areas, each corresponding to one of the plurality of label information; the plurality of image filling areas includes the first image filling area, which corresponds to the target label information. In this way, the label information corresponding to each image can be determined during shooting, images are filled into the image filling areas according to their label information, and the first template filled with images is generated automatically, so that a user can quickly obtain the desired image through the first template with high efficiency.
Optionally, as shown in fig. 10, the apparatus 200 further includes:
the second display module 204 is configured to display a plurality of priority information in a second preset area in the shooting preview interface;
a second determining module 205, configured to receive a third input, and determine target priority information according to the third input;
the generating module 203 is specifically configured to:
generating a first template according to the target label information and the target priority information;
wherein each of the image fill areas corresponds to one of the plurality of priority information, and the first image fill area corresponds to the target priority information.
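With priority information added, each fill area corresponds to a (label, priority) pair, and a capture lands in the area matching both its target label and target priority. The following is a hedged sketch of that indexing, with invented names and structures:

```python
def place_capture(areas, image_id, target_tag, target_priority):
    """Fill the area keyed by (target label, target priority); each fill area
    corresponds to one label and one priority level."""
    key = (target_tag, target_priority)
    if key not in areas:
        return False  # no area configured for this label/priority pair
    areas[key] = image_id
    return True

# a 'food' label with two priority slots and a 'scenery' label with one
areas = {("food", 1): None, ("food", 2): None, ("scenery", 1): None}
placed = place_capture(areas, "img_010.jpg", "food", 2)
```

Selecting target priority 2 for a "food" capture fills only that slot, leaving the priority-1 "food" area untouched.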
Optionally, as shown in fig. 11, the apparatus 200 further includes:
a third display module 206, configured to display a frame selection area in an image preview area in the shooting preview interface;
an adjusting module 207, configured to receive a fourth input, and adjust the image framed by the framing area according to the fourth input;
a fourth display module 208, configured to display a thumbnail of the first image in a third preset area in the shooting preview interface, where the thumbnail of the first image is an image framed in the frame selection area.
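The frame-selection-to-thumbnail behavior above amounts to cropping the preview to the adjusted frame rectangle. A minimal sketch, using nested lists to stand in for pixel rows (the frame tuple layout is an assumption):

```python
def crop_to_frame(pixels, frame):
    """Crop a preview image to the frame-selection area so the displayed
    thumbnail shows only the framed content; frame is (left, top, width, height)."""
    left, top, width, height = frame
    return [row[left:left + width] for row in pixels[top:top + height]]

preview = [[(r, c) for c in range(8)] for r in range(6)]  # 8x6 dummy preview
thumbnail = crop_to_frame(preview, (2, 1, 4, 3))
```

In a real implementation the crop would run on the captured bitmap, but the geometry is the same: the thumbnail dimensions follow the frame, not the full preview.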
Optionally, as shown in fig. 12, the apparatus 200 further includes:
a first receiving module 209, configured to receive a fifth input to the first image in the first image filling area;
a fifth display module 210, configured to display at least one second image in response to the fifth input, where tag information corresponding to the second image is the same as tag information corresponding to the first image;
a second receiving module 211, configured to receive a sixth input to a target image in the at least one second image;
a replacing module 212, configured to replace the first image in the first image filling area with the target image in response to the sixth input.
Optionally, the generating module 203 is specifically configured to: receiving a second input, acquiring a first image, receiving a seventh input, responding to the seventh input, identifying a target scene corresponding to the seventh input, and generating a first template corresponding to the target scene according to the target label information.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network-attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not particularly limited in this regard.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 13, an electronic device 300 is further provided in this embodiment of the present application, and includes a processor 301, a memory 302, and a program or an instruction stored in the memory 302 and capable of being executed on the processor 301, where the program or the instruction is executed by the processor 301 to implement each process of the foregoing embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 14 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, and processor 410.
Those skilled in the art will appreciate that the electronic device 400 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 410 through a power management system, so that charging, discharging, and power-consumption management are implemented through the power management system. The electronic device structure shown in fig. 14 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange components differently, which is not described in detail here.
Wherein the display unit 406 is configured to: displaying a plurality of label information in a first preset area in a shooting preview interface;
the processor 410 is configured to: receiving a first input, and determining target label information according to the first input;
the processor 410 is further configured to: receiving a second input, acquiring a first image, and generating a first template according to the target label information, wherein a first image filling area of the first template is filled with the first image;
wherein the first template has a plurality of image filling areas, each of the plurality of image filling areas corresponds to one of the plurality of label information, the plurality of image filling areas includes the first image filling area, and the first image filling area corresponds to the target label information.
In the embodiment of the application, the display unit displays a plurality of label information in a first preset area of the shooting preview interface; the processor receives a first input and determines target label information according to the first input; the processor then receives a second input, acquires a first image, and generates a first template according to the target label information, where a first image filling area of the first template is filled with the first image. The first template has a plurality of image filling areas, each corresponding to one of the plurality of label information; the plurality of image filling areas includes the first image filling area, which corresponds to the target label information. In this way, the label information corresponding to each image can be determined during shooting, images are filled into the image filling areas according to their label information, and the first template filled with images is generated automatically, so that a user can quickly obtain the desired image through the first template with high efficiency.
Optionally, the display unit 406 is further configured to: displaying a plurality of priority information in a second preset area in the shooting preview interface;
the user input unit 407 is configured to: receiving a third input, the processor 410 is further configured to: determining target priority information according to the third input;
the processor 410 is further configured to: generating a first template according to the target label information and the target priority information;
wherein each of the image fill areas corresponds to one of the plurality of priority information, and the first image fill area corresponds to the target priority information.
Optionally, the display unit 406 is further configured to: in the shooting preview interface, displaying a frame selection area in an image preview area;
the user input unit 407 is configured to: receiving a fourth input, the processor 410 is further configured to: adjusting the image framed in the framing area according to the fourth input;
the display unit 406 is further configured to: and in the shooting preview interface, displaying a thumbnail of the first image in a third preset area, wherein the thumbnail of the first image is the image framed in the framing area.
Optionally, the user input unit 407 is configured to: receiving a fifth input to the first image in the first image fill area;
the display unit 406 is further configured to: in response to the fifth input, displaying at least one second image, wherein the label information corresponding to the second image is the same as the label information corresponding to the first image;
the user input unit 407 is further configured to: receiving a sixth input to a target image of the at least one second image;
the processor 410 is further configured to: in response to the sixth input, replacing the first image in the first image fill area with the target image.
Optionally, the user input unit 407 is further configured to: receiving a seventh input;
the processor 410 is further configured to: and responding to the seventh input, identifying a target scene corresponding to the seventh input, and generating a first template corresponding to the target scene according to the target label information.
It should be understood that, in the embodiment of the present application, the input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042, where the graphics processor 4041 processes image data of a still picture or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 406 may include a display panel 4061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 407 includes a touch panel 4071, also referred to as a touch screen, and other input devices 4072. The touch panel 4071 may include two parts: a touch detection device and a touch controller. The other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 409 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 410 may integrate an application processor, which mainly handles the operating system, user interfaces, and applications, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 410.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. An image processing method, characterized in that the method comprises:
displaying a plurality of label information in a first preset area in a shooting preview interface;
receiving a first input, and determining target label information according to the first input;
receiving a second input, acquiring a first image, and generating a first template according to the target label information, wherein a first image filling area of the first template is filled with the first image;
wherein the first template has a plurality of image filling areas, each of the plurality of image filling areas corresponds to one of the plurality of label information, the plurality of image filling areas includes the first image filling area, and the first image filling area corresponds to the target label information.
2. The method of claim 1, wherein prior to said acquiring the first image, the method further comprises:
displaying a plurality of priority information in a second preset area in the shooting preview interface;
receiving a third input, and determining target priority information according to the third input;
the generating a first template according to the target label information includes:
generating a first template according to the target label information and the target priority information;
wherein each of the image fill areas corresponds to one of the plurality of priority information, and the first image fill area corresponds to the target priority information.
3. The method of claim 1, wherein prior to said acquiring the first image, the method further comprises:
in the shooting preview interface, displaying a frame selection area in an image preview area;
receiving a fourth input, and adjusting the image framed in the framing area according to the fourth input;
after the acquiring the first image, the method further comprises:
and in the shooting preview interface, displaying a thumbnail of the first image in a third preset area, wherein the thumbnail of the first image is the image framed in the framing area.
4. The method of claim 1, wherein after generating the first template according to the target tag information, the method further comprises:
receiving a fifth input to the first image in the first image fill area;
in response to the fifth input, displaying at least one second image, wherein the label information corresponding to the second image is the same as the label information corresponding to the first image;
receiving a sixth input to a target image of the at least one second image;
in response to the sixth input, replacing the first image in the first image fill area with the target image.
5. The method of claim 1, wherein generating the first template from the target tag information comprises:
receiving a seventh input;
and responding to the seventh input, identifying a target scene corresponding to the seventh input, and generating a first template corresponding to the target scene according to the target label information.
6. An image processing apparatus, characterized in that the apparatus comprises:
the first display module is used for displaying a plurality of label information in a first preset area in a shooting preview interface;
the first determining module is used for receiving a first input and determining target label information according to the first input;
the generating module is used for receiving a second input, acquiring a first image, and generating a first template according to the target label information, wherein a first image filling area of the first template is filled with the first image;
wherein the first template has a plurality of image filling areas, each of the plurality of image filling areas corresponds to one of the plurality of label information, the plurality of image filling areas includes the first image filling area, and the first image filling area corresponds to the target label information.
7. The apparatus of claim 6, further comprising:
the second display module is used for displaying a plurality of priority information in a second preset area in the shooting preview interface;
the second determining module is used for receiving a third input and determining target priority information according to the third input;
the generation module is specifically configured to:
generating a first template according to the target label information and the target priority information;
wherein each of the image fill areas corresponds to one of the plurality of priority information, and the first image fill area corresponds to the target priority information.
8. The apparatus of claim 6, further comprising:
the third display module is used for displaying a frame selection area in an image preview area in the shooting preview interface;
the adjusting module is used for receiving a fourth input and adjusting the image framed in the framing area according to the fourth input;
and the fourth display module is used for displaying the thumbnail of the first image in a third preset area in the shooting preview interface, wherein the thumbnail of the first image is the image framed in the frame selection area.
9. The apparatus of claim 6, further comprising:
a first receiving module for receiving a fifth input to the first image in the first image fill area;
a fifth display module, configured to display at least one second image in response to the fifth input, where tag information corresponding to the second image is the same as tag information corresponding to the first image;
a second receiving module, configured to receive a sixth input of a target image in the at least one second image;
a replacement module to replace the first image in the first image fill area with the target image in response to the sixth input.
10. The apparatus of claim 6, wherein the generation module is specifically configured to: receiving a second input, acquiring a first image, receiving a seventh input, responding to the seventh input, identifying a target scene corresponding to the seventh input, and generating a first template corresponding to the target scene according to the target label information.
11. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, which program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 5.
CN202011372114.0A 2020-11-30 2020-11-30 Image processing method and device and electronic equipment Active CN112492206B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011372114.0A CN112492206B (en) 2020-11-30 2020-11-30 Image processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN112492206A CN112492206A (en) 2021-03-12
CN112492206B true CN112492206B (en) 2021-10-26

Family

ID=74937150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011372114.0A Active CN112492206B (en) 2020-11-30 2020-11-30 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112492206B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104103085A (en) * 2013-04-11 2014-10-15 三星电子株式会社 Objects in screen images
CN106155508A (en) * 2015-04-01 2016-11-23 腾讯科技(上海)有限公司 A kind of information processing method and client
CN109063562A (en) * 2018-06-29 2018-12-21 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN109379542A (en) * 2018-10-23 2019-02-22 深圳豪客互联网有限公司 A kind of shooting picture joining method, device and computer readable storage medium
CN109862267A (en) * 2019-01-31 2019-06-07 维沃移动通信有限公司 A kind of image pickup method and terminal device
CN110784652A (en) * 2019-11-15 2020-02-11 北京达佳互联信息技术有限公司 Video shooting method and device, electronic equipment and storage medium
CN111050076A (en) * 2019-12-26 2020-04-21 维沃移动通信有限公司 Shooting processing method and electronic equipment
CN111813929A (en) * 2020-05-27 2020-10-23 维沃移动通信有限公司 Information processing method and device and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110016150A1 (en) * 2009-07-20 2011-01-20 Engstroem Jimmy System and method for tagging multiple digital images
JP6494504B2 (en) * 2015-12-25 2019-04-03 キヤノン株式会社 Information processing apparatus, control method, and program


Also Published As

Publication number Publication date
CN112492206A (en) 2021-03-12

Similar Documents

Publication Publication Date Title
CN111612873B (en) GIF picture generation method and device and electronic equipment
CN113093968B (en) Shooting interface display method and device, electronic equipment and medium
KR20140098009A (en) Method and system for creating a context based camera collage
CN113079316B (en) Image processing method, image processing device and electronic equipment
CN113905175A (en) Video generation method and device, electronic equipment and readable storage medium
CN112449110B (en) Image processing method and device and electronic equipment
CN112287141A (en) Photo album processing method and device, electronic equipment and storage medium
CN112698761A (en) Image display method and device and electronic equipment
CN114302009A (en) Video processing method, video processing device, electronic equipment and medium
CN111885298B (en) Image processing method and device
CN113194256A (en) Shooting method, shooting device, electronic equipment and storage medium
CN112330728A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN112162805B (en) Screenshot method and device and electronic equipment
CN113271378A (en) Image processing method and device and electronic equipment
CN112685119A (en) Display control method and device and electronic equipment
CN112492206B (en) Image processing method and device and electronic equipment
CN114443567A (en) Multimedia file management method, device, electronic equipment and medium
CN114390205A (en) Shooting method and device and electronic equipment
CN113779293A (en) Image downloading method, device, electronic equipment and medium
CN114584704A (en) Shooting method and device and electronic equipment
CN113360684A (en) Picture management method and device and electronic equipment
CN113139367A (en) Document generation method and device and electronic equipment
CN113873080B (en) Multimedia file acquisition method and device
CN111966642B (en) Picture management method and device and electronic equipment
CN116074459A (en) File generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant