CN111124231A - Picture generation method and electronic equipment - Google Patents

Picture generation method and electronic equipment

Info

Publication number
CN111124231A
CN111124231A (application CN201911370283.8A; granted as CN111124231B)
Authority
CN
China
Prior art keywords
image
input
picture
images
electronic device
Prior art date
Legal status
Granted
Application number
CN201911370283.8A
Other languages
Chinese (zh)
Other versions
CN111124231B (en)
Inventor
胡双双 (Hu Shuangshuang)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201911370283.8A
Publication of CN111124231A
Application granted
Publication of CN111124231B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The invention provides a picture generation method and an electronic device, relates to the technical field of image processing, and aims to solve the problem of poor picture display caused by users manually adjusting pictures. The method comprises the following steps: displaying an identifier in each of N regions of a first picture, where each region corresponds to M images, each of the M images belongs to a different one of M pictures, the identifier in a region indicates that the M images corresponding to that region differ from one another, and the display areas of those M images in their respective pictures all correspond to that region, N being a positive integer and M an integer greater than 1; receiving a first input on the N identifiers; and, in response to the first input, generating a target picture from N first images and a second image, where each first image is one of the differing images corresponding to one region and the second image is the image of the first picture outside the N regions.

Description

Picture generation method and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to a picture generation method and electronic equipment.
Background
With the development of mobile communication technology, electronic devices generally have a shooting function, and shooting pictures by using electronic devices has become an indispensable part of people's daily life.
Generally, after taking several highly similar pictures, a user selects the one with the best shooting effect. If a local area of the selected picture does not meet the user's expectations, the user may adjust it, for example by reshaping part of the picture with a retouching application on the electronic device. However, since most users lack retouching skills, the adjusted picture may suffer from distortion and other problems, resulting in a poor display effect.
Disclosure of Invention
The embodiment of the invention provides a picture generation method and an electronic device, aiming to solve the problem of poor picture display caused by users manually adjusting pictures.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides a picture generation method. The method is applied to an electronic device and may comprise the following steps: displaying an identifier in each of N regions of a first picture, where each region corresponds to M images, each of the M images belongs to a different one of M pictures, the identifier in a region indicates that the M images corresponding to that region differ from one another, and the display areas of those M images in their respective pictures all correspond to that region, N being a positive integer and M an integer greater than 1; receiving a first input on the N identifiers; and, in response to the first input, generating a target picture from N first images and a second image, where each first image is one of the differing images corresponding to one region and the second image is the image of the first picture outside the N regions.
In a second aspect, an embodiment of the present invention provides an electronic device comprising a display module, a receiving module, and a processing module. The display module is configured to display an identifier in each of N regions of a first picture, where each region corresponds to M images, each of the M images belongs to a different one of M pictures, the identifier in a region indicates that the M images corresponding to that region differ from one another, and the display areas of those M images in their respective pictures all correspond to that region, N being a positive integer and M an integer greater than 1. The receiving module is configured to receive a first input on the N identifiers displayed by the display module. The processing module is configured to generate, in response to the first input received by the receiving module, a target picture from N first images and a second image, where each first image is one of the differing images corresponding to one region and the second image is the image of the first picture outside the N regions.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor; when the computer program is executed by the processor, the steps of the picture generation method of the first aspect are implemented.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the picture generation method as in the first aspect described above.
In the embodiment of the present invention, the electronic device may display an identifier in each of N regions of a first picture; each region corresponds to M images, each of the M images belongs to a different one of M pictures, the identifier in a region indicates that the M images corresponding to that region differ from one another, and the display areas of those M images in their respective pictures all correspond to that region, with N a positive integer and M an integer greater than 1. The electronic device may then receive a first input on the N identifiers and, in response, generate a target picture from N first images and a second image, where each first image is one of the differing images corresponding to one region and the second image is the image of the first picture outside the N regions. With this scheme, the electronic device displays N identifiers indicating where the images of the M pictures differ, so that for each region the user can select, by input on the N identifiers, the image he or she considers to have the best shooting effect as the first image of that region; the target picture synthesized from the N first images and the second image is therefore likely the one that best meets the user's needs, i.e., the picture generated by the embodiment of the invention has a better display effect.
Drawings
Fig. 1 is a schematic diagram of an architecture of an android operating system according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a picture generation method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of N regions of M pictures according to an embodiment of the present invention;
FIG. 4 is a schematic interface diagram of an electronic device displaying N identifiers and at least two elements according to an embodiment of the present invention;
fig. 5 is a schematic interface diagram of an electronic device displaying a preview picture according to an embodiment of the present invention;
FIG. 6 is a schematic interface diagram of a combined image of an electronic device according to an embodiment of the present invention;
fig. 7 is a schematic diagram of S pictures spliced by the electronic device according to the embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 9 is a hardware schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the described embodiments without making any inventive step, fall within the scope of protection of the present application.
The term "and/or" herein describes an association between objects and indicates that three relationships may exist: for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. The symbol "/" herein denotes an "or" relationship between the associated objects; for example, "A/B" denotes A or B.
The terms "first" and "second," etc. herein are used to distinguish between different objects and are not used to describe a particular order of objects. For example, the first input and the second input, etc. are for distinguishing different inputs, rather than for describing a particular order of inputs.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to mean serving as examples, illustrations or descriptions. Any embodiment or design described as "exemplary" or "e.g.," an embodiment of the present invention is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
The electronic device in the embodiment of the present invention may be an electronic device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
The following describes a software environment to which the image generation method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system. For example, in the embodiment of the present invention, the electronic device may specifically display N identifiers in N areas of a first picture of M pictures through some application programs, and generate the target picture in response to a first input of the N identifiers by a user.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
Generally, an application program may include two parts, one part refers to content displayed on a screen of an electronic device, for example, the electronic device displays N identifiers in N areas of a first picture of M pictures; the other part refers to a service (service) running in the background of the electronic device, and is used for detecting input of a user for the application program and responding to the input to execute corresponding actions, for example, responding to the first input of the N identifications and generating a target picture according to the N first images and the second image.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the image generation method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the image generation method may operate based on the android operating system shown in fig. 1. That is, the processor or the electronic device may implement the image generation method provided by the embodiment of the present invention by running the software program in the android operating system.
The electronic device in the embodiment of the invention may be a terminal device, either mobile or non-mobile. For example, the mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the non-mobile terminal device may be a personal computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiment of the present invention is not particularly limited.
The execution subject of the picture generation method provided by the embodiment of the present invention may be the electronic device itself, or a functional module and/or functional entity in the electronic device capable of implementing the method, which may be determined according to actual use requirements; the embodiment of the present invention is not limited in this respect. The following exemplarily describes the picture generation method using an electronic device as the execution subject.
Generally, after a user takes a plurality of pictures of a certain shooting object through an electronic device, one picture with a good shooting effect can be selected from the plurality of pictures. In one scenario, a user can share the picture with a good shooting effect to other users through the electronic device. In another scenario, the user may delete the other pictures except the picture with the better shooting effect from the electronic device, so as to save the storage space of the electronic device.
In the embodiment of the present invention, the electronic device may display an identifier in each of N regions of a first picture; each region corresponds to M images, each of the M images belongs to a different one of M pictures, the identifier in a region indicates that the M images corresponding to that region differ from one another, and the display areas of those M images in their respective pictures all correspond to that region, with N a positive integer and M an integer greater than 1. The electronic device may then receive a first input on the N identifiers and, in response, generate a target picture from N first images and a second image, where each first image is one of the differing images corresponding to one region and the second image is the image of the first picture outside the N regions. With this scheme, the electronic device displays N identifiers indicating where the images of the M pictures differ, so that for each region the user can select, by input on the N identifiers, the image he or she considers to have the best shooting effect as the first image of that region; the target picture synthesized from the N first images and the second image is therefore likely the one that best meets the user's needs, i.e., the picture generated by the embodiment of the invention has a better display effect.
The following describes an example of a picture generation method and an electronic device according to an embodiment of the present invention with reference to the drawings.
As shown in fig. 2, an embodiment of the present invention provides a picture generation method, which may be applied to an electronic device. The method may include S200 to S202 described below.
S200, the electronic equipment displays one identifier in each of the N areas of the first picture.
In an embodiment of the present invention, each of the N regions may correspond to M images. Each of the M images corresponding to each region may be an image in a different one of the M pictures. The identifier in one region is used for indicating that the M images corresponding to the one region are different, and the display regions of the M images corresponding to the one region in different pictures correspond to the one region. N is a positive integer, and M is an integer greater than 1.
Optionally, the M pictures may be M pictures with similarity greater than or equal to a preset threshold in the electronic device, that is, the M pictures may be pictures with higher similarity in the electronic device.
The first picture may be any one of M pictures.
Optionally, before step S200, the picture generation method provided in the embodiment of the present invention may further include: the electronic device determines the M pictures. Specifically, in one possible implementation, the M pictures are selected by the electronic device from its storage space in response to a user trigger; for example, the electronic device may, in response to a received user input, select M pictures whose similarity is greater than or equal to a preset threshold, or determine the M pictures from the user's selection input on them. In another possible implementation, the M pictures are a group of pictures taken by the user with the electronic device in the same session, for example multiple pictures taken in the same time period, at the same place, and of the same subject. In yet another possible implementation, the M pictures are pictures with similarity greater than or equal to a preset threshold received from other users' electronic devices.
Taking as an example the case where the electronic device determines the M pictures from the user's selection input, the sources of the M pictures are exemplarily described below.
The electronic device may, in response to a user trigger operation or periodically according to a preset period, detect whether its storage space contains multiple pictures whose similarity exceeds a preset threshold, for example multiple pictures taken by the user with the electronic device in the same time period, at the same place, and of the same photographic subject.
If no such group of pictures is detected, the electronic device may wait for the user's next trigger operation or for its next detection period, and then check again whether its storage space contains multiple pictures whose similarity exceeds the preset threshold.
If the electronic device detects that the pictures in the storage space of the electronic device contain a plurality of pictures with similarity higher than a preset threshold, the electronic device can prompt the user whether to process the plurality of pictures or not in a voice or text mode and the like. If the electronic device receives the confirmation input of the user, the electronic device may display a picture selection interface, where the picture selection interface may include at least the plurality of pictures. The user can select and input M pictures in the multiple pictures, so that the electronic equipment is triggered to confirm the pictures selected by the user as the M pictures.
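The patent does not specify how the similarity of pictures is computed. As a minimal illustrative sketch only (the mean-absolute-difference score and the function names below are assumptions, not the patent's method), a grouping step could look like:

```python
import numpy as np

def similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Return a similarity score in [0, 1] for two same-shaped RGB arrays.

    1.0 means identical; the score falls as the mean per-pixel
    difference grows. (Hypothetical metric, for illustration only.)
    """
    if img_a.shape != img_b.shape:
        return 0.0  # differently shaped pictures treated as dissimilar
    diff = np.abs(img_a.astype(np.int16) - img_b.astype(np.int16))
    return 1.0 - diff.mean() / 255.0

def group_similar(pictures, threshold=0.95):
    """Collect pictures whose similarity to the first one meets the threshold."""
    base = pictures[0]
    return [p for p in pictures if similarity(base, p) >= threshold]

# Two nearly identical 4x4 pictures and one completely different one.
a = np.zeros((4, 4, 3), dtype=np.uint8)
b = a.copy(); b[0, 0] = 255            # one pixel differs
c = np.full((4, 4, 3), 255, np.uint8)  # every pixel differs
group = group_similar([a, b, c], threshold=0.9)
print(len(group))  # -> 2
```

In practice a perceptual metric such as SSIM would be a more robust choice than raw pixel differences, but the thresholding logic is the same.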
Optionally, the first picture may be any one of the M pictures. It can be understood that, since the similarity of the M pictures is high and the images of the regions other than the N regions are the same across the M pictures, the electronic device may determine any one of the M pictures as the first picture.
Exemplarily, as shown in fig. 3, taking an example that M pictures include a picture 1 shown in (a) in fig. 3 and a picture 2 shown in (b) in fig. 3, since images of other areas except for the area indicated by 31 and the area indicated by 32 are the same in the picture 1 and the picture 2, the electronic device may determine the picture 1 as the first picture or the picture 2 as the first picture.
Optionally, each of the N regions may be a region in which images of corresponding regions in the M pictures are different. For example, with continued reference to fig. 3, the N regions may include two regions, a 1 st region including a region indicated by 31 in picture 1 and a region indicated by 31 in picture 2, and a 2 nd region including a region indicated by 32 in picture 1 and a region indicated by 32 in picture 2.
Optionally, the N identifiers may be displayed in a first display mode. Specifically, an identifier may be the region itself highlighted in a preset color; a text identifier overlaid on the region with a preset color and a preset transparency; or an icon overlaid on the region with a preset color, shape, transparency, and blinking pattern.
For example, taking the first picture as picture 1 shown in (a) in fig. 3 as an example, the electronic device may highlight the region indicated by 31 and the region indicated by 32 in picture 1; alternatively, the electronic device may respectively display a circle mark on the area indicated by 31 and the area indicated by 32 in the picture 1 in a floating manner; still alternatively, the electronic device may display a text label of "different region" in a floating manner on the region indicated by 31 and the region indicated by 32 in the picture 1. Besides the above display forms, the N identifiers may have other possible display forms, which may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
When N is an integer greater than or equal to 2, the N identifiers may be completely the same, may be partially the same, or may be completely different. For example, the electronic device may respectively display a text label of "different region" in a floating manner on the region indicated by 31 and the region indicated by 32 in the picture 1; it is also possible to display a text label "different area 1" in a floating manner on the area indicated by 31 and a text label "different area 2" in a floating manner on the area indicated by 32.
Optionally, the electronic device may display an identifier in the i-th region of the first picture. For example, take M as 3, with the M pictures being picture 1, picture 2, and picture 3, and picture 1 as the first picture: the i-th region of picture 1 may contain image 1, the region of picture 2 corresponding to the i-th region may contain image 2, and the region of picture 3 corresponding to the i-th region may contain image 3. If images 1, 2, and 3 are not all identical (all three may differ, or one may differ from the other two), the electronic device may display an identifier in the i-th region of picture 1.
It should be noted that, for a specific implementation manner of displaying the identifier in the other areas except for the ith area in the N areas, reference may be made to the specific implementation manner of displaying the identifier in the ith area by the electronic device, which is not described herein again.
Optionally, when the M pictures have the same shape and size, the electronic device may compare them pixel by pixel. If the pixels of a corresponding region differ across the M pictures, the electronic device may determine that at least two of that region's M images differ. After comparing all pixels of the M pictures, the electronic device can determine the N regions. When the shapes of the M pictures differ, the electronic device may first identify the target objects in the M pictures and then compare the pixels of those target objects; if they differ, the electronic device may determine that at least two images differ in the region where the target object is located, and after comparing all target objects of the M pictures, it can determine the N regions.
For example, as shown in fig. 3, still taking the picture in (a) of fig. 3 as picture 1 and the picture in (b) as picture 2, the electronic device may compare pictures 1 and 2 pixel by pixel and determine that the region indicated by 31 and the region indicated by 32 are the regions where the pixels differ. Alternatively, it may first identify the fallen leaf and the bear in pictures 1 and 2, and then compare only the pixels in the areas where the leaf and the bear are located, thereby determining that the regions indicated by 31 and 32 are the regions where pictures 1 and 2 differ.
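The pixel-by-pixel comparison described above can be sketched as follows. This is a simplified illustration under stated assumptions (the boolean difference mask and the 4-connected flood fill are implementation details chosen for the sketch, not taken from the patent):

```python
import numpy as np
from collections import deque

def differing_mask(pictures):
    """True wherever the M same-shaped pictures disagree in any channel."""
    stack = np.stack(pictures)                  # shape (M, H, W, C)
    return (stack != stack[0]).any(axis=(0, 3))

def regions(mask):
    """4-connected components of a boolean mask as (top, left, bottom, right) boxes."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                top = bottom = y
                left = right = x
                q = deque([(y, x)])
                seen[y, x] = True
                while q:                        # breadth-first flood fill
                    cy, cx = q.popleft()
                    top, bottom = min(top, cy), max(bottom, cy)
                    left, right = min(left, cx), max(right, cx)
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes

pic1 = np.zeros((6, 6, 3), np.uint8)
pic2 = pic1.copy()
pic2[1, 1] = 200   # first differing region
pic2[4, 4] = 200   # second differing region
mask = differing_mask([pic1, pic2])
print(regions(mask))  # -> [(1, 1, 1, 1), (4, 4, 4, 4)]
```

Each returned box corresponds to one of the N regions in which an identifier would be displayed.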
S201, the electronic equipment receives first input of the N identifications.
Optionally, the first input may take several forms: N separate inputs, one per identifier; a single input covering all N identifiers; or x inputs on x identifiers plus one input on the remaining (N - x) identifiers, where x is a positive integer less than N.
S202, the electronic device responds to the first input and generates a target picture from the N first images and a second image.
In an embodiment of the present invention, each first image may be one of different images corresponding to one region. For example, still taking the picture shown in (a) in fig. 3 as picture 1 and the picture shown in (b) in fig. 3 as picture 2 as an example, as shown in fig. 3, the image corresponding to the region indicated by 31 includes the image indicated by 31 in picture 1 and the image indicated by 31 in picture 2, and the first image in the region indicated by 31 may be the image indicated by 31 in picture 1 or the image indicated by 31 in picture 2.
In an embodiment of the invention, the second image may be an image of an area other than the N areas in the first picture. For example, as shown in fig. 3, taking the first picture as picture 1 in (a) of fig. 3 as an example, the second image may be an image other than the region indicated by 31 and the region indicated by 32 in picture 1.
The target picture may be a picture generated by combining the N first images and the second image. For example, still taking the picture shown in (a) in fig. 3 as picture 1, the picture shown in (b) in fig. 3 as picture 2, and picture 1 as the first picture, suppose the first image of the region indicated by 31 is the image indicated by 31 in picture 1, the first image of the region indicated by 32 is the image indicated by 32 in picture 2, and the second image is the image of picture 1 outside the regions indicated by 31 and 32. The electronic device may then combine these three images to generate the target picture, that is, the picture shown in (c) in fig. 3.
After the electronic device displays the N identifiers in the N regions of the first picture, the user may perform a first input on the N identifiers. After receiving the first input, the electronic device may determine the N first images and the second image in response to the first input, and generate the target picture from the N first images and the second image.
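Under the same assumption that pictures are represented as 2D lists of pixel values, the synthesis in S202 can be sketched as follows; `compose_target` and its `choices` parameter are hypothetical names introduced for illustration, not from the embodiment.

```python
def compose_target(first_picture, pictures, choices):
    """Generate the target picture: the first picture supplies the second
    image (everything outside the N regions); for each region, the pixels
    of the user-chosen picture (the first image of that region) are copied
    in.  `choices` maps a region, given as a frozenset of (row, col)
    coordinates, to the index of the chosen picture in `pictures`."""
    target = [row[:] for row in first_picture]  # start from the base picture
    for region, pic_index in choices.items():
        for r, c in region:                     # overwrite region pixels
            target[r][c] = pictures[pic_index][r][c]
    return target
```

For instance, choosing picture 2's image for one region and picture 1's image for the other reproduces the combination shown in (c) of fig. 3.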
Optionally, after the electronic device generates the target picture, the picture generation method provided in the embodiment of the present invention may further include: the electronic equipment processes the target picture. In one scenario, the user can share the target picture with other users through the electronic equipment; in another scenario, the user can delete the M pictures, thereby saving storage space on the electronic equipment. Of course, after generating the target picture, the electronic device may further perform other processing on it, which may be determined according to actual usage requirements.
The embodiment of the invention provides a picture generation method in which the electronic device can display N identifiers indicating that the images in certain regions of the M pictures differ. Through an input on the N identifiers, the user can select one image from the different images corresponding to each region, for example the image the user considers to have the best shooting effect, as the first image of that region. The target picture synthesized from the N first images and the second image is therefore likely the picture that best suits the user's needs; that is, the picture generated by the embodiment of the invention has a better display effect.
Optionally, different first inputs may correspond to different implementation manners of generating a target picture by an electronic device, and the implementation manner of generating the target picture in the embodiment of the present invention may include any one of the following three implementation manners:
Implementation mode 1
In implementation mode 1, the first input may include N first sub-inputs and N second sub-inputs. Specifically, the first input may be N inputs performed by the user on the N identifiers, and each of the N inputs may include a first sub-input and a second sub-input. For each of the N regions, the electronic device may determine the corresponding first image in the manner provided in S201A to S202B described below. After determining the N first images, the electronic device may generate the target picture from the N first images and the second image. Specifically, S201 above may be implemented by S201A and S201B described below, and S202 above may be implemented by S202A, S202B, and S202C described below.
S201A, the electronic device receives an ith first sub-input on the identifier of the ith region.
Wherein i is a positive integer less than or equal to N.
Optionally, the first sub-input may be a touch input by the user on the identifier of the ith region. The touch input may be a preset number of clicks, a long press, a slide in a given direction, or another possible input.
For example, the first sub-input may be a single-click input by the user on the identifier of the ith region. This may be determined according to actual usage requirements, and the embodiment of the invention is not limited thereto.
S202A, the electronic device responds to the ith first sub-input and displays at least two elements.
Wherein each of the at least two elements may be used to indicate one of different images corresponding to the ith region.
For example, the at least two elements may be the different images corresponding to the ith region, thumbnails of those images, or image identifiers of those images. This may be determined according to actual usage requirements, and the embodiment of the invention is not limited thereto.
S201B, the electronic device receives an i-th second sub-input to a first element of the at least two elements.
Optionally, the second sub-input may be a touch input by the user on the first element, where the touch input may be a preset number of clicks, a long press, a slide in a given direction, or another possible input.
For example, the second sub-input may be a single-click input by the user on the first element. This may be determined according to actual usage requirements, and the embodiment of the invention is not limited thereto.
S202B, the electronic device responds to the ith second sub-input, determines one image indicated by the first element as a first image corresponding to the ith area, and updates the identification of the ith area from the first display mode to the second display mode.
Optionally, the second display mode is a display mode different from the first display mode in S201. For example, if the identifier of the ith region is a circular icon and the first display mode displays the circular icon in green, the second display mode may display it in blue; that is, the electronic device may update the green circular icon to a blue circular icon.
The identifier displayed in the second display mode may indicate that the user has performed at least one selection input on the image of the region indicated by that identifier. If the user is not satisfied with the selected image, the user may perform a selection input on the image of that region again.
Optionally, after the user performs a selection input on the image of the ith region again, the electronic device may display the identifier of the ith region in a further display mode different from both the first and the second display modes. For example, the electronic device may superimpose a numeric indicator on the identifier of the ith region, where the numeric indicator may indicate the number of times the user has reselected the first image corresponding to the ith region.
S202C, after determining the N first images, the electronic device generates a target picture from the N first images and the second image.
Optionally, before the step S202C, the method for generating a picture according to the embodiment of the present invention may further include: the electronic device determines the second image. The determination time of the second image is not particularly limited in the embodiment of the present invention. For example, the electronic device may determine the second image after S200 and before S201. For another example, the electronic device may determine the second image after S202B and before S202C.
Illustratively, N is 2. As shown in fig. 4 (a), the electronic device may randomly select one picture from M pictures as the first picture 41. Thereafter, the electronic device may display the first picture 41 with the logo 1 in a 1 st area 42 of the first picture 41 and the logo 2 in a 2 nd area 43 of the first picture 41. The user may first perform a click input on the identifier 1, and if the 1 st region of the M pictures includes 4 different images, the electronic device may display 4 elements, that is, an element 11, an element 12, an element 13, and an element 14, as shown in (b) of fig. 4, in response to the click input on the identifier 1, where each element corresponds to one image of the 4 different images of the 1 st region. After that, the user may click on an element 11, i.e., a first element, of the 4 elements, so that the electronic device may determine, in response to a click input to the element 11, the image indicated by the element 11 as the first image corresponding to the 1 st area, and update the identifier 1 from the first display mode to the second display mode. Then, the user may continue to click and input the identifier 2, and the specific implementation manner of determining, by the electronic device, the first image corresponding to the 2 nd area may refer to the specific implementation manner of determining, by the electronic device, the first image corresponding to the 1 st area, which is not described herein again.
In the embodiment of the present invention, the electronic device may sequentially determine, according to a first input of a user, the first image corresponding to each of the N regions, so that the electronic device generates the target picture according to the N first images and the second image. By the scheme, the user can freely combine the images corresponding to the N areas according to personal preferences, so that the satisfaction degree of the user on generating the pictures by the electronic equipment can be improved.
Optionally, in a case that one element is a thumbnail of one image, if the user cannot clearly see the differences between the images from the thumbnails when determining the first image, the user may perform a second input on a second element of the at least two elements displayed by the electronic device, so as to trigger the electronic device to cancel displaying the thumbnail indicated by the second element and display the image indicated by the second element instead.
For example, in implementation mode 1 above, before the electronic device receives the second sub-input after displaying the at least two elements, the picture generation method provided by the embodiment of the present invention may further include S203 to S204 described below.
S203, the electronic device receives a second input of a second element of the at least two elements.
Optionally, the second element may be all of the at least two elements, that is, a user may sequentially input each of the at least two elements; or may be a part of at least two elements, that is, the user may randomly select a part of at least two elements for input.
Optionally, the second input may be a touch input by the user on the second element. The touch input may be a click, a long press, a slide, or the like. For example, the second input may be a single-click input by the user on the second element. This may be determined according to actual usage requirements, and the embodiment of the invention is not limited thereto.
S204, the electronic device responds to the second input and displays one image indicated by the second element.
Alternatively, the one image may be an image having a display ratio of 100%, or an image having a display ratio larger than that of the thumbnail of the one image. For example, if the display ratio of one image in one area in the first picture is 100% and the display ratio of the thumbnail of the one image in the first picture is 30%, the electronic device may display the image indicated by the second element with one display ratio larger than 30% in response to the second input.
Alternatively, the one image may be an image with an adjustable display scale. For example, the electronic device may display an image at a first scale in response to the second input; after the electronic device receives a user zoom input for the image, the electronic device may adjust the scale of the image to a second scale, i.e., the electronic device may display the image at the second scale.
In the embodiment of the invention, in the case that one element is a thumbnail of one image, the electronic device may display one image indicated by the second element in response to a second input of the user, so that the user may see one image with a display scale larger than that of the thumbnail, thereby facilitating the user to determine the first image from the images indicated by at least two elements.
Optionally, after displaying the at least two elements, if the user wants to view an effect of one image indicated by the third element in the whole picture, the user may perform a third input on the third element of the at least two elements, so as to trigger the electronic device to display a preview picture including the image corresponding to the third element.
For example, in implementation mode 1 above, before the electronic device receives the second sub-input after displaying the at least two elements, the picture generation method provided by the embodiment of the present invention may further include S205 to S206 described below.
S205, the electronic device receives a third input of a third element of the at least two elements.
Optionally, the third element may be all of the at least two elements, that is, the user may sequentially perform the third input on each of the at least two elements, or may be a part of the at least two elements, that is, the user may randomly select a part of the at least two elements to perform the third input.
Optionally, the third input is an input different from the second input in S203. The third input may be a touch input by the user on the third element, and the touch input may be a click, a long press, a slide, or the like. For example, where the second input is a single-click input by the user on the second element, the third input may be a double-click input by the user on the third element. This may be determined according to actual usage requirements, and the embodiment of the invention is not limited thereto.
And S206, the electronic equipment responds to the third input, generates a preview picture according to the image indicated by the third element and the third image, and displays the preview picture.
The third image may be an image of a region other than the ith region in the first picture. The third element is one of at least two elements corresponding to the identifier displayed in the ith area.
Illustratively, as shown in (a) in fig. 5, the ith region is the region displaying identifier 1, and the at least two elements of that region include element 11, element 12, element 13, and element 14. Taking the third element as element 12, after the electronic device receives a third input on element 12 from the user, as shown in (b) in fig. 5, the electronic device may generate a preview picture according to the image 51 indicated by element 12 and the third image 52 (the image of the first picture outside the region of image 51), and display the preview picture.
In the embodiment of the invention, since the electronic device can generate and display a preview picture in response to the third input of the user, the user can see the overall effect of one image indicated by the third element in the complete picture, thereby facilitating the user to determine the first image from the images indicated by the at least two elements.
Implementation mode 2
In implementation 2, when the first input is one input of N identifiers by the user, S202 may be implemented by S202D described below.
S202D, the electronic device, in response to the first input, combines the different images corresponding to the N regions to obtain S groups of images, and stitches each group of images with the second image respectively, so as to generate S pictures including the target picture.
Each of the S groups of images includes N images corresponding to the N regions, one image per region. S is a positive integer greater than N.
Optionally, in implementation 2, the first input may be one input of the N identifiers by the user. Specifically, the first input may be a touch input, a voice input, a gesture input, or the like. For example, the touch input may be a click input or a long press input of a user on a "composition" control displayed by the electronic device, or the like. The "combine" control can be used to trigger the electronic device to combine multiple different images.
For example, as shown in fig. 6 (a), in a case where each of the N regions of the first picture displays one identifier, the electronic device may further display a combination control 61, where the combination control 61 may be used to trigger the electronic device to combine a plurality of different images of the N regions. The user may make a first input to the "combine" control, causing the electronic device to receive the first input.
In the embodiment of the invention, the electronic device can select one image from the different images corresponding to each region to form a group, thereby obtaining the S groups of images, where each of the S groups differs from every other group in at least one image.
In the embodiment of the present invention, after the electronic device obtains the S groups of images, each group of images may be stitched with the second image (i.e., the image of the first picture other than the images of the N regions). Specifically, each image in a group comes from a region of one of the M pictures, and the electronic device may determine how to stitch that image with the second image according to its position in the M pictures.
Illustratively, as shown in fig. 6, taking N as 2 as an example, the 2 regions are a region including identifier 1 (hereinafter referred to as region 1) and a region including identifier 2 (hereinafter referred to as region 2), where region 1 may correspond to 3 elements, namely element 11, element 12, and element 13, and region 2 may correspond to 2 elements, namely element 21 and element 22. After the electronic device displays identifier 1 in region 1 of the first picture and identifier 2 in region 2, if the user considers that the number of regions and the number of elements corresponding to each region are both small, the user may perform a touch input on the combination control 61. After receiving the touch input, the electronic device may, in response, combine the images indicated by the different elements corresponding to regions 1 and 2, obtaining 6 groups of images, namely (the image 11 indicated by element 11, the image 21 indicated by element 21), (the image 12 indicated by element 12, the image 21 indicated by element 21), (the image 13 indicated by element 13, the image 21 indicated by element 21), (the image 11 indicated by element 11, the image 22 indicated by element 22), (the image 12 indicated by element 12, the image 22 indicated by element 22), and (the image 13 indicated by element 13, the image 22 indicated by element 22). The electronic device may then stitch each group of images with the second image (i.e., the image of the first picture other than the images of regions 1 and 2), thereby generating the 6 pictures shown in (b) of fig. 6. As shown in fig. 7, each of the 6 pictures includes one group of images, and the 6 pictures may include the target picture.
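The combination in S202D amounts to a Cartesian product over the candidate images of each region, followed by stitching each resulting group with the second image. Below is a minimal sketch under the same 2D-list representation of pictures; the function and parameter names are hypothetical, not from the embodiment.

```python
from itertools import product

def all_combinations(first_picture, pictures, regions, candidates):
    """Enumerate every combination of one image per region and stitch each
    combination with the second image (the first picture outside the N
    regions), yielding S candidate pictures.  `regions[i]` is a set of
    (row, col) coordinates, and `candidates[i]` lists the indices of the
    pictures that provide a distinct image for region i, so
    S = len(candidates[0]) * ... * len(candidates[N-1])."""
    results = []
    for group in product(*candidates):  # one picture index per region
        target = [row[:] for row in first_picture]
        for region, pic_index in zip(regions, group):
            for r, c in region:
                target[r][c] = pictures[pic_index][r][c]
        results.append(target)
    return results
```

With 3 candidates for region 1 and 2 candidates for region 2, as in the example above, this yields S = 3 × 2 = 6 pictures.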
Optionally, the electronic device may combine the different images corresponding to the N regions when triggered by the user, or may combine them automatically. For example, if the number of identifiers displayed by the electronic device is less than or equal to a first threshold, and the number of elements corresponding to each identifier is less than or equal to a second threshold (where the two thresholds may be the same or different), the electronic device may combine the different images corresponding to the N regions to obtain S groups of images and generate S pictures.
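The threshold condition described above could be checked as follows; the function name and the idea of passing per-identifier element counts are assumptions for illustration, since the embodiment leaves the threshold values to actual usage requirements.

```python
def should_auto_combine(element_counts, first_threshold, second_threshold):
    """Decide whether to combine automatically: the number of identifiers
    (= len(element_counts)) must not exceed the first threshold, and no
    identifier's element count may exceed the second threshold, keeping
    the number of generated pictures S small."""
    return (len(element_counts) <= first_threshold
            and all(n <= second_threshold for n in element_counts))
```

For example, 2 identifiers with 3 and 2 elements would pass thresholds of 3 and 4, while 5 identifiers, or any identifier with 5 elements, would not.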
In the embodiment of the invention, the electronic equipment can generate S pictures including the target picture through the first input of the user, so that the user can see the effect of combining the images of the N areas on the whole, and further the user can select one image which is considered by the user to have the best effect from the combined images.
Optionally, in implementation 2 above, after the electronic device generates S pictures, the user may select one picture from the S pictures as the target picture. The electronic device may store the target picture and delete pictures other than the target picture from the S pictures in response to a fourth input of the user to the target picture.
For example, after the above S202D, the picture generating method provided by the embodiment of the present invention may further include the following S207 and S208.
And S207, the electronic equipment receives a fourth input of the target picture.
After the electronic device obtains S pictures, the electronic device may display the S pictures. After the user previews the pictures in the S pictures, the user may perform a fourth input on the target picture, for example, the user may click the target picture or drag the target picture to the target display area, so that the electronic device may receive the fourth input.
Optionally, the electronic device may display thumbnails of S pictures; one of the S pictures may also be displayed, and then after receiving a switching input from the user, the electronic device may display another one of the S pictures. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
And S208, the electronic equipment responds to the fourth input, stores the target picture and deletes the pictures except the target picture in the S pictures.
Optionally, the electronic device may store the target picture in a preset storage area of the electronic device; or, the electronic device may store the target picture in a storage area designated by the user, for example, a current interface of the electronic device may include a plurality of display areas, each display area may correspond to one storage area, and if the user drags the target picture to a target display area of the plurality of display areas, the electronic device may store the target picture in the storage area corresponding to the target display area.
Optionally, the picture generation method provided by the embodiment of the present invention may further include: deleting the M pictures, or keeping the M pictures.
In the embodiment of the invention, after the electronic equipment generates the S pictures, the user can select the target picture from the S pictures. On one hand, the electronic equipment can store the picture the user is satisfied with, thereby completing the combination of the images in the M pictures; on the other hand, deleting the pictures other than the target picture from the S pictures saves storage space on the electronic equipment.
Implementation mode 3
Implementation mode 3 may be a combined use of implementation modes 1 and 2. Specifically, when the number of regions is large or the number of images corresponding to the regions is large, the electronic device may first determine the first images corresponding to part of the N regions through implementation mode 1, and then combine the images corresponding to the remaining regions through implementation mode 2; alternatively, the electronic device may first combine the images corresponding to part of the N regions through implementation mode 2, and then determine the first images corresponding to the remaining regions through implementation mode 1.
Optionally, the partial regions of the N regions may be regions with a larger number of corresponding elements; they may also be regions where the images indicated by the elements differ greatly from one another, so that it is easy for the user to determine the first image among them; or the user may select the partial regions from the N regions at will. For example, when the electronic device detects that the number of identifiers it displays is less than or equal to a first threshold and the number of elements corresponding to each identifier is less than or equal to a second threshold, the electronic device may update the identifiers of the partial regions from the first display mode to a third display mode, where the partial regions are regions whose number of corresponding elements exceeds a third threshold, so that the user may determine the first images of the partial regions in sequence.
In the embodiment of the invention, as the electronic device can generate the target picture by combining the implementation mode 1 and the implementation mode 2, a user can select the first images of the N areas more flexibly.
As shown in fig. 8, an embodiment of the present invention provides an electronic device 800, and the electronic device 800 may include a display module 801, a receiving module 802, and a processing module 803. The display module 801 may be configured to respectively display an identifier in each of N regions of a first picture, where each region corresponds to M images, and each image in the M images is an image in a different picture of the M pictures, where the identifier in one region is used to indicate that the M images corresponding to the one region are different, and display regions of the M images corresponding to the one region in different pictures all correspond to the one region, N is a positive integer, and M is an integer greater than 1. The receiving module 802 may be configured to receive a first input of the N identifiers displayed by the displaying module 801. The processing module 803 may be configured to generate, in response to the first input received by the receiving module 802, a target picture according to N first images and a second image, where one first image is one of different images corresponding to one region, and the second image is an image of another region of the first picture except the N regions.
Optionally, in this embodiment of the present invention, the first input may include N first sub-inputs and N second sub-inputs. The receiving module 802 may be specifically configured to receive an ith first sub-input of an identifier of an ith area. The display module 801 may be further configured to display at least two elements in response to the ith first sub-input received by the receiving module 802, where one element may be used to indicate one of different images corresponding to the ith area. The receiving module 802 may be specifically configured to receive an ith second sub-input of a first element of the at least two elements. The processing module 803 may be specifically configured to determine, in response to the ith second sub-input received by the receiving module 802, one image indicated by the first element as a first image corresponding to the ith area, and control the display module 801 to update the identifier of the ith area from the first display mode to the second display mode; and after the N first images are determined, generating a target picture according to the N first images and the second image.
Optionally, in the embodiment of the present invention, one element is a thumbnail of one image. The receiving module 802 may be configured to receive a second input to a second element of the at least two elements after the display module 801 displays the at least two elements. The display module 801 may be further configured to display an image indicated by the second element in response to the second input received by the receiving module 802.
Optionally, the receiving module 802 may be further configured to receive a third input to a third element of the at least two elements displayed by the display module 801 after the display module 801 displays the at least two elements. The processing module 803 may further be configured to generate a preview picture according to the one image and the third image indicated by the third element in response to the third input received by the receiving module 802, and display the preview picture. The third image may be an image of a region other than the ith region in the first picture.
Optionally, in this embodiment of the present invention, the processing module 803 may be specifically configured to, in response to a first input received by the receiving module 802, combine different images corresponding to the N regions to obtain S groups of images, perform image stitching on each group of images and a second image, and generate S pictures including a target picture, where each group of images includes N images corresponding to the N regions, and S is a positive integer greater than N; the receiving module 802 may be further configured to receive a fourth input to the target picture after the processing module 803 generates the target picture; the processing module 803 may further be configured to store the target picture and delete pictures other than the target picture from the S pictures in response to the fourth input received by the receiving module 802.
The electronic device provided by the embodiment of the present invention can implement each process implemented by the electronic device in the above method embodiments, and is not described herein again to avoid repetition.
The embodiment of the invention provides electronic equipment that can display N identifiers indicating that the images in certain regions of the M pictures differ. Through an input on the N identifiers, the user can select one image from the different images corresponding to each region, for example the image the user considers to have the best shooting effect, as the first image of that region. The target picture synthesized from the N first images and the second image is therefore likely the picture that best suits the user's needs; that is, the picture generated by the embodiment of the invention has a better display effect.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention. As shown in fig. 9, the electronic device 100 includes but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 9 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 110 may be configured to control the display unit 106 to display an identifier in each of N regions of a first picture. Each region corresponds to M images, and each of the M images is an image in a different one of M pictures. The identifier in a region indicates that the M images corresponding to that region differ, and the display regions of those M images in the different pictures all correspond to that region. N is a positive integer, and M is an integer greater than 1.
The processor 110 may be further configured to control the user input unit 107 to receive a first input to the N identifiers displayed on the display unit 106 and, in response to the first input, to generate a target picture according to N first images and a second image, where each first image is one of the different images corresponding to one region, and the second image is the image of the regions of the first picture other than the N regions.
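For illustration, the synthesis the processor 110 performs, pasting the N selected first images over the first picture so that the untouched area supplies the second image, might be sketched as follows. Images are simplified to 2-D lists of pixel values, and all names here are hypothetical, not from the patent:

```python
def synthesize_target(base, selections):
    """Return a copy of `base` with each chosen region image pasted in.

    base: 2-D list of pixel values (the first picture; everything outside
    the N regions plays the role of the second image).
    selections: list of (top, left, patch) tuples, one per region, where
    `patch` is the user's chosen first image for that region.
    """
    out = [row[:] for row in base]          # copy so the original survives
    for top, left, patch in selections:
        for r, patch_row in enumerate(patch):
            for c, px in enumerate(patch_row):
                out[top + r][left + c] = px
    return out

base = [[0] * 4 for _ in range(3)]
target = synthesize_target(base, [(1, 1, [[7, 7], [7, 7]])])
print(target[1])   # [0, 7, 7, 0]
print(base[1])     # [0, 0, 0, 0]
```

A real implementation would operate on decoded bitmaps rather than nested lists, but the region-replacement structure is the same.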
It is to be understood that, in the embodiment of the present invention, the display module 801 in the structural schematic diagram of the electronic device (for example, fig. 8) may be implemented by the display unit 106, the receiving module 802 may be implemented by the user input unit 107, and the processing module 803 may be implemented by the processor 110.
It should be understood that, in this embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during message transmission or a call. Specifically, after receiving downlink data from a base station, the radio frequency unit 101 forwards it to the processor 110 for processing; it also transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102, or stored in the memory 109, into an audio signal and output it as sound. The audio output unit 103 may also provide audio output related to a specific function performed by the electronic device 100 (e.g., a call signal reception sound or a message reception sound). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive audio or video signals. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 can receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101.
The electronic device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, or another sensor. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and a proximity sensor, which can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally along three axes) and, when stationary, the magnitude and direction of gravity; it can be used to identify the orientation of the electronic device (for example, for switching between landscape and portrait modes, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The sensor 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail here.
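As a hedged sketch of how the stationary gravity readings mentioned above can drive the landscape/portrait switching, the device angle can be derived from the x- and y-axis gravity components. The thresholds and orientation labels here are illustrative assumptions, not taken from the patent:

```python
import math

def screen_orientation(gx, gy):
    """Classify device orientation from stationary gravity readings (in g)
    along the device's x and y axes; 0 degrees means upright portrait."""
    angle = math.degrees(math.atan2(gx, gy))
    if -45 <= angle <= 45:
        return "portrait"
    if 45 < angle <= 135:
        return "landscape-left"
    if -135 <= angle < -45:
        return "landscape-right"
    return "portrait-upside-down"

print(screen_orientation(0.0, 1.0))   # portrait
print(screen_orientation(1.0, 0.0))   # landscape-left
```

A production implementation would also debounce readings and ignore samples where the device is far from stationary.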
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, can collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 1071 with a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include a touch detection device and a touch controller. The touch detection device detects the position of a user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. The touch panel 1071 may be implemented as a resistive, capacitive, infrared, or surface acoustic wave panel, among other types. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and a power switch key), a trackball, a mouse, and a joystick; these are not described in detail here.
Further, the touch panel 1071 may be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, after which the processor 110 provides a corresponding visual output on the display panel 1061 according to that type. Although in fig. 9 the touch panel 1071 and the display panel 1061 are shown as two independent components implementing the input and output functions of the electronic device, in some embodiments they may be integrated to implement these functions; this is not limited here.
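The round trip described above, touch coordinates in and a region decision out, can be illustrated with a minimal hit-test against the N on-screen identifiers. The region names and rectangle bounds below are invented for the example and do not come from the patent:

```python
def dispatch_touch(x, y, regions):
    """Hit-test touch coordinates against region rectangles.

    Mimics the flow above: the touch controller supplies (x, y), and
    the processor decides which on-screen identifier, if any, was tapped.
    """
    for name, (left, top, right, bottom) in regions.items():
        if left <= x < right and top <= y < bottom:
            return name
    return None

regions = {"region-1": (0, 0, 100, 100), "region-2": (100, 0, 200, 100)}
print(dispatch_touch(150, 40, regions))   # region-2
print(dispatch_touch(250, 40, regions))   # None
```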
The interface unit 108 is an interface for connecting an external device to the electronic apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 100 or may be used to transmit data between the electronic apparatus 100 and the external device.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area. The program storage area may store the operating system, application programs required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the electronic device (such as audio data and a phone book). Further, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is the control center of the electronic device. It connects the various parts of the entire electronic device using various interfaces and lines, and it performs the functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby monitoring the electronic device as a whole. The processor 110 may include one or more processing units. Optionally, the processor 110 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It can be appreciated that the modem processor may also not be integrated into the processor 110.
The electronic device 100 may further include a power supply 111 (e.g., a battery) for supplying power to each component. Optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, which implements functions such as managing charging, discharging, and power consumption.
In addition, the electronic device 100 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor. When executed by the processor, the computer program implements each process of the above embodiments of the picture generation method and can achieve the same technical effects; to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above embodiments of the picture generation method and can achieve the same technical effects; to avoid repetition, details are not described here again. The computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present application may be embodied in the form of a software product that is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods described in the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, the invention is not limited to the specific embodiments described above, which are illustrative rather than restrictive. Various changes may be made by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A picture generation method, applied to an electronic device, wherein the method comprises:
respectively displaying an identifier in each of N areas of a first picture, wherein each area corresponds to M images, each image in the M images is an image in different pictures in M pictures, the identifier in one area is used for indicating that the M images corresponding to the one area are different, the display areas of the M images corresponding to the one area in different pictures correspond to the one area, N is a positive integer, and M is an integer greater than 1;
receiving a first input of the N identifiers;
and in response to the first input, generating a target picture according to N first images and a second image, wherein one first image is one of the different images corresponding to one area, and the second image is the image of the areas of the first picture other than the N areas.
2. The method of claim 1, wherein the first input comprises N first sub-inputs and N second sub-inputs;
the receiving a first input of the N identifications includes:
receiving an ith first sub-input of an identifier of an ith area, wherein i is a positive integer less than or equal to N;
the generating a target picture from the N first images and the second image in response to the first input includes:
displaying at least two elements in response to the ith first sub-input, one element indicating one of different images corresponding to the ith region;
the receiving a first input of the N identifications, further comprising:
receiving an ith second sub-input to a first element of the at least two elements;
the generating a target picture from the N first images and the second image in response to the first input further comprises:
in response to the ith second sub-input, determining the image indicated by the first element as the first image corresponding to the ith area, and updating the identifier of the ith area from a first display mode to a second display mode;
after the N first images are determined, the target picture is generated according to the N first images and the second image.
3. The method of claim 2, wherein one element is a thumbnail of one image;
after the displaying at least two elements, the method further comprises:
receiving a second input to a second element of the at least two elements;
in response to the second input, displaying an image indicated by the second element.
4. The method of claim 2, wherein after displaying the at least two elements, the method further comprises:
receiving a third input to a third element of the at least two elements;
and in response to the third input, generating a preview picture according to the image indicated by the third element and a third image, and displaying the preview picture, wherein the third image is the image of the areas of the first picture other than the ith area.
5. The method of claim 1, wherein generating a target picture from the N first images and the second image in response to the first input comprises:
responding to the first input, combining different images corresponding to the N areas to obtain S groups of images, respectively carrying out image splicing on each group of images and a second image to generate S pictures including the target picture, wherein each group of images includes N images corresponding to the N areas, and S is a positive integer greater than N;
after the generating the target picture, the method further includes:
receiving a fourth input to the target picture;
and responding to the fourth input, storing the target picture, and deleting pictures except the target picture in the S pictures.
6. An electronic device, comprising a display module, a receiving module, and a processing module;
the display module is configured to respectively display an identifier in each of N regions of a first picture, each region corresponds to M images, each image in the M images is an image in a different picture of M pictures, where the identifier in one region is used to indicate that the M images corresponding to the one region are different, and display regions of the M images corresponding to the one region in different pictures all correspond to the one region, N is a positive integer, and M is an integer greater than 1;
the receiving module is used for receiving first input of the N identifications displayed by the display module;
the processing module is configured to generate a target picture according to the N first images and a second image in response to the first input received by the receiving module, where one first image is one of different images corresponding to one region, and the second image is an image of another region in the first picture except for the N regions.
7. The electronic device of claim 6, wherein the first input comprises N first sub-inputs and N second sub-inputs;
the receiving module is specifically configured to receive an ith first sub-input of an identifier of an ith area, where i is a positive integer less than or equal to N;
the display module is further configured to display at least two elements in response to the ith first sub-input received by the receiving module, wherein one element is used for indicating one image in different images corresponding to the ith area;
the receiving module is specifically configured to receive an ith second sub-input to a first element of the at least two elements;
the processing module is specifically configured to determine, in response to the ith second sub-input received by the receiving module, an image indicated by the first element as a first image corresponding to the ith area, and control the display module to update the identifier of the ith area from a first display mode to a second display mode; and after determining the N first images, generating the target picture according to the N first images and the second image.
8. The electronic device of claim 7, wherein an element is a thumbnail of an image;
the receiving module is used for receiving a second input of a second element of the at least two elements after the display module displays the at least two elements;
the display module is further configured to display an image indicated by the second element in response to the second input received by the receiving module.
9. The electronic device of claim 7,
the receiving module is further configured to receive a third input to a third element of the at least two elements displayed by the display module after the display module displays the at least two elements;
the processing module is further configured to: in response to the third input received by the receiving module, generate a preview picture according to the image indicated by the third element and a third image, and display the preview picture, where the third image is the image of the regions of the first picture other than the ith region.
10. The electronic device of claim 6,
the processing module is specifically configured to combine, in response to the first input received by the receiving module, different images corresponding to the N regions to obtain S groups of images, perform image stitching on each group of images and a second image, and generate S pictures including the target picture, where each group of images includes N images corresponding to the N regions, and S is a positive integer greater than N;
the receiving module is further configured to receive a fourth input to the target picture after the processing module generates the target picture;
the processing module is further configured to store the target picture and delete pictures other than the target picture from the S pictures in response to the fourth input received by the receiving module.
11. An electronic device, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the picture generation method as claimed in any one of claims 1 to 5.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the picture generation method as claimed in any one of claims 1 to 5.
CN201911370283.8A 2019-12-26 2019-12-26 Picture generation method and electronic equipment Active CN111124231B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911370283.8A CN111124231B (en) 2019-12-26 2019-12-26 Picture generation method and electronic equipment


Publications (2)

Publication Number Publication Date
CN111124231A true CN111124231A (en) 2020-05-08
CN111124231B CN111124231B (en) 2021-02-12

Family

ID=70503326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911370283.8A Active CN111124231B (en) 2019-12-26 2019-12-26 Picture generation method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111124231B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105009170A (en) * 2012-12-28 2015-10-28 日本电气株式会社 Object identification device, method, and storage medium
CN106204698A (en) * 2015-05-06 2016-12-07 北京蓝犀时空科技有限公司 Virtual image for independent assortment creation generates and uses the method and system of expression
CN106803899A (en) * 2015-11-26 2017-06-06 华为技术有限公司 The method and apparatus for merging image
CN107682580A (en) * 2016-08-01 2018-02-09 日本冲信息株式会社 Information processor and method
CN106815809A (en) * 2017-03-31 2017-06-09 联想(北京)有限公司 A kind of image processing method and device
CN108509904A (en) * 2018-03-30 2018-09-07 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN109460177A (en) * 2018-09-27 2019-03-12 维沃移动通信有限公司 A kind of image processing method and terminal device
CN109978015A (en) * 2019-03-06 2019-07-05 重庆金山医疗器械有限公司 A kind of image processing method, device and endoscopic system
CN110292774A * 2019-06-28 2019-10-01 广州华多网络科技有限公司 Spot-the-difference picture material processing method, apparatus, device, and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111625166A (en) * 2020-05-21 2020-09-04 维沃移动通信有限公司 Picture display method and device
CN111625166B (en) * 2020-05-21 2021-11-30 维沃移动通信有限公司 Picture display method and device
WO2021249436A1 (en) * 2020-06-11 2021-12-16 维沃移动通信有限公司 Picture processing method and apparatus, and electronic device
CN113014799A (en) * 2021-01-28 2021-06-22 维沃移动通信有限公司 Image display method and device and electronic equipment

Also Published As

Publication number Publication date
CN111124231B (en) 2021-02-12

Similar Documents

Publication Publication Date Title
CN110995923B (en) Screen projection control method and electronic equipment
CN110891144B (en) Image display method and electronic equipment
CN110908558B (en) Image display method and electronic equipment
CN108446058B (en) Mobile terminal operation method and mobile terminal
CN111142991A (en) Application function page display method and electronic equipment
CN111142723B (en) Icon moving method and electronic equipment
CN110062105B (en) Interface display method and terminal equipment
CN110752981B (en) Information control method and electronic equipment
CN109828731B (en) Searching method and terminal equipment
CN111273993B (en) Icon arrangement method and electronic equipment
CN110874147B (en) Display method and electronic equipment
CN111026299A (en) Information sharing method and electronic equipment
CN110703972B (en) File control method and electronic equipment
CN111124231B (en) Picture generation method and electronic equipment
CN111064848B (en) Picture display method and electronic equipment
CN111163224B (en) Voice message playing method and electronic equipment
CN111190517B (en) Split screen display method and electronic equipment
CN110908750B (en) Screen capturing method and electronic equipment
CN110944113B (en) Object display method and electronic equipment
CN110209324B (en) Display method and terminal equipment
CN109067975B (en) Contact person information management method and terminal equipment
CN111399715B (en) Interface display method and electronic equipment
CN110647506B (en) Picture deleting method and terminal equipment
CN111104533A (en) Picture processing method and electronic equipment
CN110750200A (en) Screenshot picture processing method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant