WO2021249436A1 - Picture processing method and apparatus, and electronic device - Google Patents

Picture processing method and apparatus, and electronic device (图片处理方法、装置及电子设备)

Info

Publication number
WO2021249436A1
WO2021249436A1 (PCT/CN2021/099182)
Authority
WO
WIPO (PCT)
Prior art keywords
picture
target
pictures
input
interface
Prior art date
Application number
PCT/CN2021/099182
Other languages
English (en)
French (fr)
Inventor
韩桂敏 (HAN Guimin)
Original Assignee
维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Priority to JP2022576090A (published as JP2023529219A)
Priority to EP21822401.2A (published as EP4160522A4)
Publication of WO2021249436A1
Priority to US18/078,887 (published as US20230106434A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to an output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 GUI techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 GUI techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/0481 GUI techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons
    • G06F 3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F 3/0487 GUI techniques using specific features provided by the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 GUI techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/14 Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 11/00 2D [two-dimensional] image generation
    • G06T 11/60 Editing figures and text; combining figures or text
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2200/32 Indexing scheme involving image mosaicing
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20132 Image cropping
    • G06T 2207/20221 Image fusion; image merging

Definitions

  • This application belongs to the field of communication technology, and specifically relates to a picture processing method and apparatus, and an electronic device.
  • In the related art, when a user needs to splice multiple pictures to obtain a combined picture, the user first uses the electronic device to crop each of the multiple pictures separately to a suitable size or shape, and then splices the cropped pictures together to obtain the combined picture.
  • The purpose of the embodiments of this application is to provide a picture processing method, a picture processing apparatus, and an electronic device, which can solve the problem of the low efficiency with which an electronic device obtains a combined picture.
  • In a first aspect, an embodiment of this application provides a picture processing method.
  • The method includes: receiving a first input from a user while a first interface is displayed.
  • The first interface includes N target identifiers, and each target identifier indicates one picture.
  • The first input is the user's input on M target identifiers among the N target identifiers, where N and M are both integers greater than 1 and M is less than or equal to N.
  • In response to the first input, the first interface is updated to a second interface, and the second interface includes the M pictures indicated by the M target identifiers; a second input from the user on the M pictures is received; and, in response to the second input, the M pictures are synthesized according to the size of each of the M pictures to obtain a target composite picture.
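The four-step flow of the first aspect can be sketched as plain Python (all class and method names here are illustrative assumptions, not taken from the patent):

```python
class PictureProcessor:
    """Illustrative sketch of the first-aspect method; names are assumptions."""

    def __init__(self, identifiers):
        self.identifiers = identifiers  # the N target identifiers, one per picture
        self.interface = "first"        # which interface is currently displayed
        self.selected = []              # the M identifiers chosen by the first input

    def first_input(self, chosen):
        # The user selects M of the N identifiers (1 < M <= N); in response,
        # the first interface is updated to the second interface.
        assert 1 < len(chosen) <= len(self.identifiers)
        self.selected = list(chosen)
        self.interface = "second"

    def second_input(self, sizes):
        # The M pictures are synthesized according to each picture's size;
        # here the "composite" is simply the pairing of identifier and size.
        assert self.interface == "second" and len(sizes) == len(self.selected)
        return list(zip(self.selected, sizes))
```

The sketch only models the interface-state transitions; the actual synthesis is described in the detailed steps below.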
  • In a second aspect, an embodiment of this application provides a picture processing apparatus, which includes a receiving module, an update module, and a processing module.
  • The receiving module is configured to receive the first input of the user while the first interface is displayed.
  • The first interface includes N target identifiers, each target identifier indicates one picture, and the first input is the user's input on M target identifiers among the N target identifiers.
  • N and M are both integers greater than 1, and M is less than or equal to N.
  • The update module is configured to update the first interface to the second interface in response to the first input received by the receiving module, and the second interface includes the M pictures indicated by the M target identifiers.
  • The receiving module is further configured to receive the second input of the user on the M pictures.
  • The processing module is configured to, in response to the second input received by the receiving module, synthesize the M pictures according to the size of each of the M pictures to obtain the target composite picture.
  • In a third aspect, an embodiment of this application provides an electronic device that includes a processor, a memory, and a program or instruction stored in the memory and executable on the processor.
  • The program or instruction, when executed by the processor, implements the steps of the method described in the first aspect.
  • In a fourth aspect, an embodiment of this application provides a readable storage medium on which a program or instruction is stored; when the program or instruction is executed by a processor, the steps of the method described in the first aspect are implemented.
  • In a fifth aspect, an embodiment of this application provides a chip; the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the method described in the first aspect.
  • In the embodiments of this application, when the electronic device displays the first interface, the user can make an input on M of the N target identifiers displayed in the first interface to trigger the electronic device to update the first interface to the second interface, and can then make a second input on the M pictures indicated by those M target identifiers in the second interface, so that the electronic device synthesizes the M pictures according to the size of each picture to obtain the target composite picture.
  • In other words, in an interface of the electronic device that displays identifiers corresponding to multiple pictures, the user can make an input on the identifiers of the pictures of interest, so that the electronic device displays another interface that includes those pictures; the user can then make an input on the pictures in that interface, and the electronic device adjusts the display position and display size of each picture according to the user's input.
  • The pictures are then synthesized to obtain the target composite picture, without the user having to edit each picture separately on the electronic device to obtain pictures of the size required for synthesis and then synthesize the edited pictures. The user's operations are thus reduced, and the efficiency with which the electronic device obtains a composite picture is improved.
  • FIG. 1 is a first schematic diagram of a picture processing method provided by an embodiment of this application;
  • FIG. 2 is a first schematic diagram of an example mobile phone interface provided by an embodiment of this application;
  • FIG. 3 is a second schematic diagram of an example mobile phone interface provided by an embodiment of this application;
  • FIG. 4 is a second schematic diagram of a picture processing method provided by an embodiment of this application;
  • FIG. 5 is a third schematic diagram of a picture processing method provided by an embodiment of this application;
  • FIG. 6 is a third schematic diagram of an example mobile phone interface provided by an embodiment of this application;
  • FIG. 7 is a fourth schematic diagram of an example mobile phone interface provided by an embodiment of this application;
  • FIG. 8 is a fifth schematic diagram of an example mobile phone interface provided by an embodiment of this application;
  • FIG. 9 is a sixth schematic diagram of an example mobile phone interface provided by an embodiment of this application;
  • FIG. 10 is a seventh schematic diagram of an example mobile phone interface provided by an embodiment of this application;
  • FIG. 11 is a fourth schematic diagram of a picture processing method provided by an embodiment of this application;
  • FIG. 12 is a fifth schematic diagram of a picture processing method provided by an embodiment of this application;
  • FIG. 13 is an eighth schematic diagram of an example mobile phone interface provided by an embodiment of this application;
  • FIG. 14 is a ninth schematic diagram of an example mobile phone interface provided by an embodiment of this application;
  • FIG. 15 is a first schematic structural diagram of a picture processing apparatus provided by an embodiment of this application;
  • FIG. 16 is a second schematic structural diagram of a picture processing apparatus provided by an embodiment of this application;
  • FIG. 17 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application.
  • For example, the user can open the interface of the photo album application on the electronic device and select some of the picture thumbnails displayed in that interface. The electronic device then updates the photo album interface to a picture editing interface and displays, in the picture editing interface, the pictures corresponding to the thumbnails the user selected. The user can operate on the displayed pictures in the picture editing interface (for example, adjust their position or size), and the electronic device can crop and splice the pictures according to the user's input and the overlap size between each pair of pictures.
  • That is, the pictures with overlapping areas are cropped first, and the displayed pictures are then spliced together.
  • In this way, the user does not need to first use the electronic device to crop each of the multiple pictures to be spliced to the size required for splicing and then splice the cropped pictures to obtain a composite picture. The user's operations are therefore reduced, and the efficiency with which the electronic device obtains a combined picture is improved.
  • FIG. 1 shows a flowchart of a picture processing method provided by an embodiment of this application; the method may be applied to an electronic device.
  • The picture processing method provided by the embodiment of this application may include the following steps 201 to 204.
  • Step 201: While the first interface is displayed, the electronic device receives the first input of the user.
  • The first interface includes N target identifiers, and each target identifier indicates one picture.
  • The first input is the user's input on M target identifiers among the N target identifiers, where N and M are both integers greater than 1 and M is less than or equal to N.
  • Specifically, the user can perform the first input on some of the picture identifiers displayed in the first interface, so that the electronic device displays the second interface and shows, in the second interface, the pictures indicated by those identifiers; the user can then make an input on those pictures, and the electronic device synthesizes them to obtain the corresponding composite picture.
  • The above-mentioned first interface may be a picture thumbnail display interface in the photo album application; the user can trigger the electronic device to run the photo album application, thereby displaying the first interface.
  • The above-mentioned target identifier may be any of the following: the thumbnail of the picture, the name of the picture, the number of the picture, and so on.
  • The sizes (for example, the areas) of the N target identifiers displayed in the first interface may be the same or different.
  • The above-mentioned first input may be a drag input by the user on one target identifier; or the first input may be the user's selection input of certain target identifiers together with a drag input on one of those target identifiers.
  • In one case, the first input is an input in which the user drags one target identifier onto another target identifier.
  • That is, when the user's first input concerns two target identifiers, the user can directly drag one target identifier onto the other (drag it from its own display position to the display position of the other identifier), so that the two identifiers have an overlapping area, without having to select the two identifiers first and then perform the drag input.
  • The overlapping area can be understood as one identifier occluding part or all of another identifier, or one picture occluding part or all of another picture.
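For axis-aligned identifiers, the overlapping area described above reduces to rectangle intersection. A minimal sketch, where the (x, y, width, height) rectangle layout is an assumption for illustration:

```python
def overlap_area(a, b):
    # a and b are rectangles (x, y, width, height) in screen coordinates
    w = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    h = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    # 0 when the identifiers do not occlude each other at all
    return w * h if w > 0 and h > 0 else 0
```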
  • In the following, the electronic device being a mobile phone is taken as an example for description.
  • As shown in FIG. 2, the mobile phone displays the picture thumbnail display interface 10 of the photo album application.
  • If the user needs to synthesize picture 11 and picture 12 in the picture thumbnail display interface 10, the user can drag picture 12 so that picture 11 and picture 12 have an overlapping area 20 (shown shaded in the figure).
  • In another case, the above-mentioned first input includes a first sub-input and a second sub-input. The first sub-input is the user's selection input of M target identifiers, and the second sub-input is an input in which the user drags one of the M target identifiers onto another of the M target identifiers; or the first sub-input is the user's selection input of M-1 of the N target identifiers.
  • In the latter case, the second sub-input is an input in which the user drags one of the M-1 target identifiers onto another target identifier, or drags another target identifier onto one of the M-1 target identifiers, where the other target identifier is an identifier among the N target identifiers other than the M-1 target identifiers.
  • That is, the user first performs a selection input on these target identifiers to trigger the electronic device to determine the pictures corresponding to them as the pictures to be synthesized, and then drags one of the target identifiers onto another target identifier so that the two identifiers have an overlapping area.
  • In other words, the user can directly determine M target identifiers through the first sub-input, or the user can first determine M-1 target identifiers through the first sub-input and then determine another target identifier through the second sub-input.
  • The specific steps for determining the M target identifiers will not be described in detail in this application.
  • Step 202: In response to the first input, the electronic device updates the first interface to the second interface.
  • The second interface includes the M pictures indicated by the M target identifiers.
  • The second interface may be a picture editing interface, and it may further include multiple controls, which may be at least one of the following: a filter control, an adjustment control, a graffiti control, a label control, and so on.
  • The second interface may also include a confirmation control, which is used to trigger the electronic device to synthesize the pictures to obtain a composite picture.
  • Specifically, the electronic device may arrange and display, in the second interface, the M pictures indicated by the M target identifiers.
  • In one implementation, step 202 may be implemented by the following step 202a.
  • Step 202a: In response to the first input, the electronic device updates the first interface to the second interface when the overlap size between one target identifier and the other target identifier is greater than or equal to a first preset threshold.
  • The overlap size may be understood as the overlap area of the identifiers, or as the ratio of the overlap area to the area of a particular identifier.
  • When the overlap size is understood as the overlap area of the identifiers, the electronic device can determine the overlap area between one target identifier and the other target identifier, and update the first interface to the second interface when that overlap area is greater than or equal to a first preset area (that is, the first preset threshold).
  • When the overlap size is understood as a ratio, the electronic device can determine the ratio of the overlap area between the two target identifiers to the area of one of them, and update the first interface to the second interface when that ratio is greater than or equal to a first preset ratio (that is, the first preset threshold).
  • In this way, the electronic device can determine whether the overlap size between the two target identifiers satisfies a preset condition, and update the first interface to the second interface only when it does. This prevents the electronic device from updating the first interface to the second interface when the user touches the screen by mistake, thereby improving the accuracy with which the electronic device responds to user input.
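Both readings of the first preset threshold (absolute overlap area, or overlap ratio) can be checked as in the following sketch. The rectangle layout, function names, and the 0.6 default (echoing the 60% example ratio used later in this description) are assumptions, not values fixed by the patent:

```python
def overlap_area(a, b):
    # a and b are rectangles (x, y, width, height)
    w = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    h = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0

def passes_area_threshold(a, b, first_preset_area):
    # overlap size read as an absolute area
    return overlap_area(a, b) >= first_preset_area

def passes_ratio_threshold(a, b, first_preset_ratio=0.6):
    # overlap size read as the ratio of the overlap area to one identifier's area
    return overlap_area(a, b) / (a[2] * a[3]) >= first_preset_ratio
```

Either predicate gates the update from the first interface to the second interface; an input that fails it is treated as an accidental touch.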
  • In one implementation, the second interface includes a first preset area and a second preset area. The first preset area is used to display a pinned (top) picture; when any other picture has an overlapping area with the pinned picture, the pinned picture covers that picture in the overlapping area.
  • In this implementation, the picture processing method provided by the embodiment of this application further includes the following step 301, and step 202 may be implemented by the following step 202b.
  • Step 301: In response to the first input, the electronic device determines the picture indicated by the other target identifier as the pinned picture.
  • Specifically, the electronic device may, according to the user's first input, determine either the picture indicated by one target identifier or the picture indicated by the other target identifier as the pinned picture.
  • The user may first determine the picture indicated by one target identifier as the pinned picture through an input, and later switch the pinned picture through another input.
  • For example, the user can double-tap the picture indicated by one target identifier, so that the electronic device switches the layer priority of the two pictures (that is, the picture indicated by the one target identifier becomes the pinned picture, and the picture indicated by the other target identifier is no longer pinned).
  • The electronic device may then display the pinned picture in the first preset area and display the other picture in the second preset area.
  • For example, when the first input is the user dragging one target identifier (identifier A) onto another target identifier (identifier B), the electronic device may determine the picture indicated by identifier B as the pinned picture; conversely, when the first input is the user dragging identifier B onto identifier A, the electronic device may determine the picture indicated by identifier A as the pinned picture.
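The pinned-picture behaviour, including the double-tap switch of layer priority, can be modelled as a simple z-order list. A sketch with illustrative names (the patent does not prescribe this data structure):

```python
class LayerStack:
    """Z-order of the pictures; the last entry is the pinned (top) picture."""

    def __init__(self, pictures, top):
        # 'top' is the picture indicated by the other target identifier (step 301)
        self.order = [p for p in pictures if p != top] + [top]

    def top_picture(self):
        return self.order[-1]

    def switch_top(self, picture):
        # e.g., the user double-taps 'picture' to give it layer priority
        self.order.remove(picture)
        self.order.append(picture)
```

Drawing the pictures in `order` from first to last reproduces the covering rule: the pinned picture is drawn last and so covers any picture it overlaps.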
  • Step 202b: The electronic device displays the picture indicated by the other target identifier in the first preset area, and displays the pictures indicated by the remaining target identifiers in the second preset area.
  • The remaining target identifiers are the identifiers among the M target identifiers other than the other target identifier.
  • When M equals 2, the electronic device may display the picture indicated by the other target identifier in the first preset area and display the picture indicated by the one target identifier in the second preset area.
  • The user can also, through an input, trigger the electronic device to display the picture indicated by the one target identifier in the first preset area and the picture indicated by the other target identifier in the second preset area instead.
  • For example, when the user's first input creates an overlapping area 20 between picture 11 and picture 12, the mobile phone can determine the ratio of the area of the overlapping area 20 to the area of picture 11 (or picture 12). When that ratio is greater than or equal to the preset ratio (for example, 60%), as shown in FIG. 6, the mobile phone can update the picture thumbnail
  • display interface 10 to a picture editing interface 21.
  • The picture editing interface 21 includes a first preset area 22, a second preset area 23, and a confirmation control 24.
  • The mobile phone can display picture 11 in the first preset area 22.
  • When M is greater than 2, the electronic device may display the picture indicated by the other target identifier in the first preset area, and display the pictures indicated by the remaining M-1 target identifiers in the second preset area.
  • The user can also, through an input, trigger the electronic device to display, in the first preset area, any one of the M-1 pictures indicated by the identifiers other than the other target identifier, and to display the picture indicated by the other target identifier in the second preset area.
  • For example, the mobile phone can update the picture thumbnail display interface 10 to the picture editing interface 21, and the picture editing interface 21 includes the first preset area 22 and the second preset area 23.
  • The mobile phone can display picture 11 in the first preset area 22, and display picture 12, picture 13, and picture 14 in the second preset area 23.
  • In one implementation, the second preset area includes a first sub-area and a second sub-area.
  • In this implementation, the "displaying the pictures indicated by the remaining target identifiers in the second preset area" in step 202b may be implemented by the following step 202b1.
  • Step 202b1: The electronic device displays the picture indicated by the other target identifier in the first preset area, displays the picture indicated by one target identifier in the first sub-area, and displays, in the second sub-area, the pictures indicated by the remaining target identifiers other than that one target identifier.
  • That is, the second preset area may include the first sub-area and the second sub-area.
  • The first sub-area is used to display one of the M-1 pictures at a time.
  • The second sub-area is used to display the M-2 pictures among the M-1 pictures other than the one displayed in the first sub-area.
  • The one picture displayed in the first sub-area and the M-2 pictures displayed in the second sub-area together make up the M-1 pictures.
  • For example, the mobile phone can update the picture thumbnail display interface 10 to the picture editing interface 21, where the picture editing interface 21 includes the first preset area 22 and the second preset area 23, and the second preset area 23 includes a first sub-area 231 and a second sub-area 232.
  • The mobile phone can display picture 11 in the first preset area 22, display picture 12 in the first sub-area 231, and display picture 13 and picture 14 in the second sub-area 232.
  • In this way, the electronic device can determine the picture indicated by the other target identifier as the pinned picture, display it in the first preset area, and display the pictures indicated by the remaining M-1 target identifiers in the second preset area, so that the user can make an input on these pictures and the electronic device can synthesize them.
  • Step 203: The electronic device receives the second input of the user on the M pictures.
  • Through the second input, the user can trigger the electronic device to adjust, in turn, the display position and display size of each of the M pictures, thereby combining the M pictures.
  • Step 204: In response to the second input, the electronic device synthesizes the M pictures according to the size of each of the M pictures to obtain the target composite picture.
  • The synthesis processing may include at least one of the following: cropping the pictures, adjusting the picture sizes, splicing the pictures, and so on.
  • The combination of the picture indicated by one target identifier and the picture indicated by another target identifier may be any of the following: the distance between one picture and the other is greater than 0 (the two pictures do not touch); the distance between them is equal to 0 (the edge lines of the two pictures coincide, but there is no overlapping area); or there is an overlapping area between the two pictures.
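The three combinations just listed can be distinguished from the largest axis gap between the two rectangles. A sketch assuming axis-aligned (x, y, width, height) rectangles:

```python
def relation(a, b):
    # Largest axis gap between the two rectangles:
    # > 0 separated, == 0 edges coincide, < 0 an overlapping area exists.
    gap = max(a[0] - (b[0] + b[2]), b[0] - (a[0] + a[2]),
              a[1] - (b[1] + b[3]), b[1] - (a[1] + a[3]))
    if gap > 0:
        return "separate"     # distance between the pictures is greater than 0
    if gap == 0:
        return "touching"     # edge lines coincide, no overlapping area
    return "overlapping"      # the pictures share an overlapping area
```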
  • the user can drag the picture 12 to a position where it overlaps the picture 11, and then input on the determination control 24 to trigger the mobile phone to synthesize the picture 11 and the picture 12 to obtain the target synthesized picture.
  • when M is greater than 2, the user can drag each of the M-1 pictures displayed in the second preset area to the position of the picture indicated by the another target identifier displayed in the first preset area, so that the M-1 pictures are combined with that picture. The user can then input on the determination control to trigger the electronic device to synthesize the M-1 pictures and the picture indicated by the another target identifier, thereby obtaining the target synthesized picture.
  • the user can sequentially drag the picture 12, the picture 13, and the picture 14 to the position of the picture 11, and then input on the determination control 24 to trigger the mobile phone to synthesize the picture 11, the picture 12, the picture 13, and the picture 14, thereby obtaining the target synthesized picture.
  • when M is greater than 2 and the second preset area includes the first sub-area and the second sub-area, the user can drag the picture displayed in the first sub-area to the position of the picture indicated by the another target identifier in the first preset area, so that the two pictures are combined, and then input on the determination control to trigger the electronic device to synthesize them into a synthesized picture. The user can then input on another picture among the M-2 pictures displayed in the second sub-area, so that the electronic device displays that picture in the first sub-area, where the user can operate on it in the same way.
  • the user can drag the picture 12 to the position of the picture 11, and then input on the determination control 24 to trigger the mobile phone to synthesize the picture 11 and the picture 12, thereby obtaining a synthesized picture 15 (that is, a combined picture of the picture 11 and the picture 12).
  • the user can then drag the picture 13 to the position of the synthesized picture 15, and input on the determination control 24 again to trigger the mobile phone to synthesize the picture 13 and the synthesized picture 15, thereby obtaining a synthesized picture 16 (that is, a combined picture of the picture 11, the picture 12, and the picture 13).
  • the user can then drag the picture 14 to the position of the synthesized picture 16, and input on the determination control 24 again to trigger the mobile phone to synthesize the picture 14 and the synthesized picture 16, thereby obtaining the target synthesized picture (that is, a combined picture of the picture 11, the picture 12, the picture 13, and the picture 14).
  • the embodiment of this application provides a picture processing method. The user can input on M target identifiers among the N target identifiers displayed in the first interface to trigger the electronic device to update the first interface to the second interface, so that the user can perform the second input on the M pictures indicated by the M target identifiers displayed in the second interface, and the electronic device can synthesize the M pictures according to the size of each of the M pictures to obtain the target synthesized picture.
  • when the user needs to synthesize multiple pictures through the electronic device, the user can input on the identifiers corresponding to those pictures in an interface that displays the identifiers, so that the electronic device displays another interface including the pictures. The user can then input on the pictures in that interface, so that the electronic device adjusts the display position and display size of each picture according to the user's input and synthesizes the pictures according to their sizes to obtain the target synthesized picture. The user does not need to first edit each picture separately through the electronic device to obtain pictures of the sizes required for synthesis and then synthesize the edited pictures, which saves the user's operations and thereby improves the efficiency with which the electronic device obtains a synthesized picture.
  • the above-mentioned second input is a drag input of the user on M-1 pictures of the M pictures in sequence.
  • the above-mentioned step 204 can be specifically implemented by the following step 204a.
  • Step 204a: In response to the second input, if any two of the M pictures have an overlapping area, the electronic device crops the overlapping part of one of the two pictures according to the overlap size of the two pictures, and synthesizes the cropped pictures with the other pictures to obtain the target synthesized picture.
  • the above-mentioned other pictures are the pictures among the M pictures other than the cropped pictures.
  • an overlap size greater than the second preset threshold can be understood as: the area of the overlapping region of the two pictures is greater than a second preset area, or the ratio of the area of the overlapping region to the area of either of the two pictures is greater than a third preset ratio.
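The two ways of judging overlap size described above (absolute area versus area ratio) can be sketched as follows. The rectangle model and the threshold parameter names are illustrative assumptions; the patent does not fix concrete values.

```python
# Hypothetical sketch of the overlap-size test: the overlap may be judged by
# the absolute area of the overlapping region, or by that area's ratio to
# either picture's area. Rects are (left, top, width, height).

def overlap_rect(a, b):
    """Intersection rectangle of a and b, or None if they do not overlap."""
    left = max(a[0], b[0])
    top = max(a[1], b[1])
    right = min(a[0] + a[2], b[0] + b[2])
    bottom = min(a[1] + a[3], b[1] + b[3])
    if right <= left or bottom <= top:
        return None
    return (left, top, right - left, bottom - top)

def overlap_exceeds(a, b, min_area=0.0, min_ratio=0.0):
    """True if the overlap area, or its share of either picture, passes a threshold."""
    ov = overlap_rect(a, b)
    if ov is None:
        return False
    area = ov[2] * ov[3]
    ratio = max(area / (a[2] * a[3]), area / (b[2] * b[3]))
    return area > min_area or ratio > min_ratio
```

Either criterion alone suffices to deem the overlap significant, matching the "or" in the passage above.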
  • the electronic device may crop the occluded part of either of any two pictures according to the overlap size of the overlapping area between the two pictures.
  • the electronic device can crop some or all of the M pictures; which pictures are cropped depends on the actual situation, and this application is not limited here.
  • the electronic device may perform splicing processing on the cropped pictures and the pictures that have not been cropped among the M pictures to obtain the target composite picture.
  • the electronic device can simultaneously crop all pictures that need cropping among the M pictures, and simultaneously splice the cropped pictures with the uncropped pictures among the M pictures to obtain the target synthesized picture, thereby improving the efficiency with which the electronic device obtains a synthesized picture.
  • the above-mentioned second input includes M-1 sub-inputs, and each sub-input is a drag input by the user on one of the M-1 pictures, and the M-1 pictures are M The picture in the picture.
  • the above-mentioned step 204 may be specifically implemented by the following step 204b.
  • Step 204b: In response to the second input, the electronic device synthesizes the first picture and the first target picture according to their overlap size to obtain a first synthesized picture; then synthesizes the second picture and the first synthesized picture according to their overlap size to obtain a second synthesized picture; and so on, until the (M-1)th picture is synthesized to obtain the target synthesized picture.
  • the above-mentioned first picture, second picture, and (M-1)th picture are all among the M-1 pictures, and the above-mentioned first target picture is the picture among the M pictures other than the M-1 pictures.
  • synthesizing the first picture and the first target picture according to their overlap size to obtain the first synthesized picture includes: if the first picture and the first target picture have an overlapping area, cropping the first picture according to the overlap size and synthesizing the cropped first picture with the first target picture to obtain the first synthesized picture; if they have no overlapping area, the electronic device synthesizes the first picture and the first target picture directly to obtain the first synthesized picture.
  • synthesizing the second picture and the first synthesized picture according to their overlap size to obtain the second synthesized picture includes: if the second picture and the first synthesized picture have an overlapping area, cropping the second picture according to the overlap size and synthesizing the cropped second picture with the first synthesized picture to obtain the second synthesized picture; if they have no overlapping area, the electronic device synthesizes the second picture and the first synthesized picture directly to obtain the second synthesized picture.
  • the user can input on the synthesis control so that the electronic device synthesizes the two pictures (that is, the first picture and the first target picture) to obtain a synthesized picture (for example, the first synthesized picture). The user can then repeat this input so that the electronic device synthesizes the next pair of pictures, until the electronic device has synthesized the last of the M pictures to obtain the target synthesized picture.
  • the electronic device may, according to the user's inputs, sequentially synthesize one of the M pictures with the first target picture or with an already obtained synthesized picture, obtaining the target synthesized picture through multiple synthesis operations. After each synthesis, the user can view the display effect of the synthesized picture, so that the user can flexibly select the pictures to be synthesized.
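The stepwise flow above amounts to a left fold: start from the first target picture, then fold in each dragged picture in turn, exposing every intermediate composite so the user can inspect it. The `combine` callback stands in for the crop-and-merge of one pair and is an assumption for illustration.

```python
# Minimal sketch of the sequential synthesis loop: fold `combine` over the
# M-1 dragged pictures, yielding each intermediate composite so the caller
# (here, the UI) can show it to the user after every step.

def synthesize(first_target, pictures, combine):
    """Yield the composite after each of the M-1 synthesis steps."""
    composite = first_target
    for pic in pictures:
        composite = combine(composite, pic)
        yield composite
```

Using a generator rather than returning only the final result mirrors the patent's point that the user can view (and react to) the display effect after each synthesis.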
  • the execution subject may be the picture processing apparatus, or a control module in the picture processing apparatus for executing the picture processing method. In the embodiments of this application, the picture processing method is described by taking the picture processing apparatus executing the method as an example.
  • in step 204b, "synthesizing the first picture and the first target picture according to the overlap size of the first picture and the first target picture" can be specifically implemented through the following step 204b1 or step 204b2.
  • Step 204b1: If the first picture and the first target picture have no overlapping area, the electronic device crops the first picture or the first target picture according to the width value of the first picture or the width value of the first target picture, and synthesizes the cropped picture with the uncropped picture.
  • the first picture is adjacent to the first target picture, and an edge line of the first picture coincides with an edge line of the first target picture.
  • the width value of the first picture and the width value of the first target picture are width values in the same direction, which may be a direction determined by the user.
  • the electronic device may crop the wider of the first picture and the first target picture according to the width value of the narrower picture, so that the cropped picture and the uncropped picture can be combined into a rectangular picture.
  • the first picture 30 and the first target picture 31 have no overlapping area, and the width value of the first picture 30 is greater than the width value of the first target picture 31, so the mobile phone can crop the first picture 30 to remove the area 32 and the area 33. Part (B) in FIG. 13 shows the picture obtained by combining the cropped first picture 30 with the first target picture 31.
  • Step 204b2: If the first picture and the first target picture have an overlapping area, the electronic device crops at least one of the first picture and the first target picture according to the width of the overlapping area, and performs synthesis based on the cropped picture.
  • if the width value of the first picture and the width value of the first target picture are both greater than the width value of the overlapping area, the electronic device crops the areas outside the overlap width in the first picture and in the first target picture, as well as the overlapping area of the bottom picture (that is, the picture other than the top picture) among the first picture and the first target picture; or, if one of the two width values is greater than the overlap width and the other is equal to it, the electronic device crops the area outside the overlap width in the wider picture, as well as the overlapping area of the bottom picture among the first picture and the first target picture.
  • the first picture 30 and the first target picture 31 have an overlapping area 34, the first picture 30 is the top picture, and the width values of both pictures are greater than the width value of the overlapping area 34. The mobile phone can therefore crop the first picture 30 and the first target picture 31 to remove the area 35 in the first picture 30, the overlapping area 34, and the area 36 in the first target picture 31. Part (B) in FIG. 14 shows the picture obtained by synthesizing the cropped first picture 30 and first target picture 31.
  • FIG. 15 shows a schematic diagram of a possible structure of a picture processing apparatus involved in an embodiment of the present application.
  • the image processing apparatus 70 may include: a receiving module 71, an updating module 72, and a processing module 73.
  • the receiving module 71 is configured to receive a user's first input when the first interface is displayed, where the first interface includes N target identifiers, each target identifier indicates one picture, the first input is the user's input on M target identifiers among the N target identifiers, N and M are both integers greater than 1, and M is less than or equal to N.
  • the update module 72 is configured to update the first interface to the second interface in response to the first input received by the receiving module 71, and the second interface includes M pictures indicated by M target identifiers.
  • the receiving module 71 is also used to receive the second input of the user on the M pictures.
  • the processing module 73 is configured to, in response to the second input received by the receiving module 71, perform synthesis processing on the M pictures according to the size of each of the M pictures to obtain a target synthesized picture.
  • the first input is an input of the user dragging one target identifier to another target identifier; or, when M is greater than 2, the first input includes a first sub-input and a second sub-input, where the first sub-input is the user's selection input on M-1 target identifiers among the N target identifiers, and the second sub-input is an input of the user dragging one of the M-1 target identifiers to the another target identifier, or an input of the user dragging the another target identifier to one of the M-1 target identifiers; the another target identifier is an identifier among the N target identifiers other than the M-1 target identifiers.
  • the update module 72 is specifically configured to update the first interface to the second interface when the overlap size of one target identifier and the other target identifier is greater than or equal to the first preset threshold.
  • the second interface includes a first preset area and a second preset area; the first preset area is used to display the top picture, and when any picture has an overlapping area with the top picture, the top picture is displayed to cover that picture in the overlapping area.
  • the image processing apparatus 70 provided in the embodiment of the present application may further include: a determining module 74.
  • the determining module 74 is configured to determine the picture indicated by another target identifier as the top picture before the update module 72 updates the first interface to the second interface.
  • the update module 72 is specifically configured to display the picture indicated by the another target identifier in the first preset area, and display the pictures indicated by the other target identifiers in the second preset area, where the other target identifiers are the M target identifiers other than the another target identifier.
  • the second preset area includes a first sub-area and a second sub-area.
  • the update module 72 is specifically configured to display the picture indicated by the one target identifier in the first sub-area, and display, in the second sub-area, the pictures indicated by the identifiers among the other target identifiers other than the one target identifier.
  • the second input is a drag input of the user on M-1 pictures of the M pictures in sequence.
  • the processing module 73 is specifically configured to: if any two of the M pictures have an overlapping area, crop the overlapping part of one of the two pictures according to the overlap size of the two pictures, and synthesize the cropped pictures with the other pictures to obtain the target synthesized picture, where the other pictures are the pictures among the M pictures other than the cropped pictures.
  • the second input includes M-1 sub-inputs, and each sub-input is a drag input by the user on one of the M-1 pictures, and the M-1 pictures are M pictures In the picture.
  • the processing module 73 is specifically configured to synthesize the first picture and the first target picture according to their overlap size to obtain the first synthesized picture; then synthesize the second picture and the first synthesized picture according to their overlap size to obtain the second synthesized picture; and so on, until the (M-1)th picture is synthesized to obtain the target synthesized picture.
  • the first picture, the second picture, and the (M-1)th picture are all among the M-1 pictures, and the first target picture is the picture among the M pictures other than the M-1 pictures.
  • the processing module 73 is specifically configured to: if there is no overlapping area between the first picture and the first target picture, perform the calculation according to the width value of the first picture or the width value of the first target picture The first picture or the first target picture is cropped, and the cropped picture and the uncropped picture are combined; or, if the first picture and the first target picture have overlapping areas, then According to the width value of the overlapping area, a cropping process is performed on at least one of the first picture and the first target picture, and a compositing process is performed based on the cropped picture.
  • the picture processing device provided in the embodiment of the present application can implement each process implemented by the picture processing device in the foregoing method embodiment. To avoid repetition, the detailed description will not be repeated here.
  • the picture processing device in the embodiment of the present application may be a device, or a component, integrated circuit, or chip in the picture processing device.
  • the device can be a mobile electronic device or a non-mobile electronic device.
  • the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a handheld computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of this application.
  • the image processing device in the embodiment of the present application may be a device with an operating system.
  • the operating system may be an Android operating system, an ios operating system, or other possible operating systems, which are not specifically limited in the embodiment of the present application.
  • the embodiment of this application provides a picture processing apparatus. When the user needs to synthesize multiple pictures through the picture processing apparatus, the user can input on the identifiers corresponding to those pictures in an interface that contains the identifiers, so that the apparatus displays another interface including the pictures. The user can then input on the pictures in that interface, so that the apparatus adjusts the display position and display size of each picture according to the user's input and synthesizes the pictures according to their sizes to obtain the target synthesized picture. The user does not need to first edit each picture separately through the apparatus to obtain pictures of the sizes required for synthesis and then synthesize the edited pictures, which saves the user's operations and improves the efficiency with which the picture processing apparatus obtains a synthesized picture.
  • an embodiment of this application further provides an electronic device, including a processor, a memory, and a program or instruction stored in the memory and executable on the processor; when the program or instruction is executed by the processor, the processes of the foregoing picture processing method embodiment are implemented, and the same technical effects can be achieved.
  • the electronic devices in the embodiments of the present application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 17 is a schematic diagram of the hardware structure of an electronic device that implements an embodiment of the present application.
  • the electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and other components.
  • the electronic device 100 may also include a power source (such as a battery) for supplying power to the various components; the power source may be logically connected to the processor 110 through a power management system, so as to implement functions such as charging management, discharging management, and power consumption management through the power management system.
  • the structure of the electronic device shown in FIG. 17 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown in the figure, combine some components, or arrange the components differently, which will not be repeated here.
  • the user input unit 107 is configured to receive the user's first input when the first interface is displayed, where the first interface includes N target identifiers, each target identifier indicates one picture, the first input is the user's input on M target identifiers among the N target identifiers, N and M are both integers greater than 1, and M is less than or equal to N.
  • the display unit 106 is configured to update the first interface to the second interface in response to the first input, and the second interface includes M pictures indicated by M target identifiers.
  • the user input unit 107 is also used to receive a second input from the user to the M pictures.
  • the processor 110 is configured to, in response to the second input, perform synthesis processing on the M pictures according to the size of each of the M pictures to obtain a target synthesized picture.
  • the embodiment of this application provides an electronic device. When the user needs to synthesize multiple pictures through the electronic device, the user can input on the identifiers corresponding to those pictures in an interface that contains the identifiers, so that the electronic device displays another interface including the pictures. The user can then input on the pictures in that interface, so that the electronic device adjusts the display position and display size of each picture according to the user's input and synthesizes the pictures according to their sizes to obtain the target synthesized picture. The user does not need to first edit each picture separately through the electronic device to obtain pictures of the sizes required for synthesis and then synthesize the edited pictures, which saves the user's operations and thereby improves the efficiency with which the electronic device obtains a synthesized picture.
  • the display unit 106 is further configured to update the first interface to the second interface when the overlap size of one target identifier and the other target identifier is greater than or equal to a first preset threshold.
  • the processor 110 is further configured to determine the picture indicated by another target identifier as the top picture before updating the first interface to the second interface.
  • the display unit 106 is further configured to display a picture indicated by another target identifier in the first preset area, and display pictures indicated by other target identifiers in the second preset area.
  • the other target identifiers are the M target identifiers other than the another target identifier.
  • the display unit 106 is further configured to display the picture indicated by the one target identifier in the first sub-area, and display, in the second sub-area, the pictures indicated by the identifiers among the other target identifiers other than the one target identifier.
  • the processor 110 is further configured to: if any two of the M pictures have an overlapping area, crop the overlapping part of one of the two pictures according to the overlap size of the two pictures, and synthesize the cropped pictures with the other pictures to obtain the target synthesized picture, where the other pictures are the pictures among the M pictures other than the cropped pictures.
  • the processor 110 is further configured to synthesize the first picture and the first target picture according to their overlap size to obtain the first synthesized picture; then synthesize the second picture and the first synthesized picture according to their overlap size to obtain the second synthesized picture; and so on, until the (M-1)th picture is synthesized to obtain the target synthesized picture.
  • the processor 110 is further configured to: if there is no overlapping area between the first picture and the first target picture, perform a comparison of the first picture or the first target picture according to the width value of the first picture or the width value of the first target picture Perform cropping processing, and synthesize the cropped picture with the un-cropped picture; or, if there is an overlapping area between the first picture and the first target picture, according to the width of the overlapped area, the At least one of a picture and the first target picture is subjected to cropping processing, and synthesis processing is performed based on the cropped picture.
  • the embodiment of this application also provides a readable storage medium storing a program or instruction; when the program or instruction is executed by a processor, the processes of the foregoing picture processing method embodiment are implemented, and the same technical effects can be achieved. To avoid repetition, details are not repeated here.
  • the processor is the processor in the electronic device described in the foregoing embodiment.
  • the readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • an embodiment of this application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is configured to run a program or instruction to implement the processes of the foregoing picture processing method embodiment, and the same technical effects can be achieved.
  • the chip mentioned in the embodiments of this application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
  • the technical solution of this application, in essence or the part contributing to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to enable a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the various embodiments of this application.


Abstract

This application discloses a picture processing method and apparatus, and an electronic device, belonging to the field of communication technologies. The method includes: receiving a first input from a user while a first interface is displayed, where the first interface includes N target identifiers, each target identifier indicates one picture, the first input is an input by the user on M of the N target identifiers, N and M are both integers greater than 1, and M is less than or equal to N; in response to the first input, updating the first interface to a second interface, where the second interface includes the M pictures indicated by the M target identifiers; receiving a second input from the user on the M pictures; and in response to the second input, compositing the M pictures according to the size of each of the M pictures to obtain a target composite picture.

Description

Picture processing method and apparatus, and electronic device
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 202010531536.1, filed in China on June 11, 2020, which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
This application belongs to the field of communication technologies, and specifically relates to a picture processing method and apparatus, and an electronic device.
BACKGROUND
Typically, when a user needs to stitch multiple pictures into one combined picture, the user may first crop each of the pictures on an electronic device to obtain pictures of suitable sizes or shapes, and then stitch the cropped pictures together to obtain one combined picture.
However, in the above method, the user has to crop each picture first and then stitch the cropped pictures, so the operations are cumbersome and time-consuming, and the electronic device obtains a combined picture inefficiently.
SUMMARY
Embodiments of this application aim to provide a picture processing method and apparatus, and an electronic device, which can solve the problem that an electronic device obtains a combined picture inefficiently.
To solve the above technical problem, this application is implemented as follows:
According to a first aspect, an embodiment of this application provides a picture processing method, including: receiving a first input from a user while a first interface is displayed, where the first interface includes N target identifiers, each target identifier indicates one picture, the first input is an input by the user on M of the N target identifiers, N and M are both integers greater than 1, and M is less than or equal to N; in response to the first input, updating the first interface to a second interface, where the second interface includes the M pictures indicated by the M target identifiers; receiving a second input from the user on the M pictures; and in response to the second input, compositing the M pictures according to the size of each of the M pictures to obtain a target composite picture.
According to a second aspect, an embodiment of this application provides a picture processing apparatus, including a receiving module, an updating module, and a processing module. The receiving module is configured to receive a first input from a user while a first interface is displayed, where the first interface includes N target identifiers, each target identifier indicates one picture, the first input is an input by the user on M of the N target identifiers, N and M are both integers greater than 1, and M is less than or equal to N. The updating module is configured to update the first interface to a second interface in response to the first input received by the receiving module, where the second interface includes the M pictures indicated by the M target identifiers. The receiving module is further configured to receive a second input from the user on the M pictures. The processing module is configured to composite, in response to the second input received by the receiving module, the M pictures according to the size of each of the M pictures to obtain a target composite picture.
According to a third aspect, an embodiment of this application provides an electronic device, including a processor, a memory, and a program or instruction stored in the memory and runnable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method according to the first aspect.
According to a fourth aspect, an embodiment of this application provides a readable storage medium storing a program or instruction, where the program or instruction, when executed by a processor, implements the steps of the method according to the first aspect.
According to a fifth aspect, an embodiment of this application provides a chip, including a processor and a communication interface coupled to the processor, where the processor is configured to run a program or instruction to implement the method according to the first aspect.
In the embodiments of this application, while the electronic device displays the first interface, the user can act on M of the N target identifiers displayed there to trigger the electronic device to update the first interface to the second interface; the user can then perform the second input on the M pictures indicated by the M target identifiers, so that the electronic device composites the M pictures according to the size of each of them, obtaining the target composite picture. When the user wants to composite multiple pictures through the electronic device, the user acts on the identifiers of those pictures in an interface that displays them, the electronic device displays another interface containing the pictures themselves, and the user's inputs in that interface let the device adjust each picture's display position and display size and composite the pictures according to their sizes to obtain the target composite picture. The user does not need to first edit each picture on the electronic device to the size required for compositing and only then composite the edited pictures, which saves user operations and thus improves the efficiency with which the electronic device produces a composite picture.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a first schematic diagram of a picture processing method according to an embodiment of this application;
FIG. 2 is a first schematic diagram of an example of a mobile phone interface according to an embodiment of this application;
FIG. 3 is a second schematic diagram of an example of a mobile phone interface according to an embodiment of this application;
FIG. 4 is a second schematic diagram of a picture processing method according to an embodiment of this application;
FIG. 5 is a third schematic diagram of a picture processing method according to an embodiment of this application;
FIG. 6 is a third schematic diagram of an example of a mobile phone interface according to an embodiment of this application;
FIG. 7 is a fourth schematic diagram of an example of a mobile phone interface according to an embodiment of this application;
FIG. 8 is a fifth schematic diagram of an example of a mobile phone interface according to an embodiment of this application;
FIG. 9 is a sixth schematic diagram of an example of a mobile phone interface according to an embodiment of this application;
FIG. 10 is a seventh schematic diagram of an example of a mobile phone interface according to an embodiment of this application;
FIG. 11 is a fourth schematic diagram of a picture processing method according to an embodiment of this application;
FIG. 12 is a fifth schematic diagram of a picture processing method according to an embodiment of this application;
FIG. 13 is an eighth schematic diagram of an example of a mobile phone interface according to an embodiment of this application;
FIG. 14 is a ninth schematic diagram of an example of a mobile phone interface according to an embodiment of this application;
FIG. 15 is a first schematic structural diagram of a picture processing apparatus according to an embodiment of this application;
FIG. 16 is a second schematic structural diagram of a picture processing apparatus according to an embodiment of this application;
FIG. 17 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of this application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.
The terms "first", "second", and the like in the specification and claims of this application are used to distinguish between similar objects rather than to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of this application can be implemented in orders other than those illustrated or described here. In addition, in the specification and claims, "and/or" indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The picture processing method provided in the embodiments of this application is described in detail below through specific embodiments and application scenarios with reference to the accompanying drawings.
In the embodiments of this application, if a user needs to crop and stitch multiple pictures on an electronic device, the user may, while the electronic device displays an interface of an album application, select some of the picture thumbnails displayed in that interface, so that the electronic device updates the album interface to a picture editing interface and displays the pictures corresponding to the selected thumbnails there. The user can then act on these pictures in the editing interface (for example, adjust their positions or sizes), so that the electronic device crops and stitches them according to the user's input and the overlap size between every two of the pictures (that is, it first crops the pictures that have overlapping regions and then stitches the displayed pictures together), thereby obtaining one composite picture. There is no need, as in the conventional method, for the user to first crop each of the pictures to the size required for stitching and only then stitch the cropped pictures. This saves user operations and thus improves the efficiency with which the electronic device obtains a combined picture.
An embodiment of this application provides a picture processing method. FIG. 1 shows a flowchart of a picture processing method according to an embodiment of this application, and the method may be applied to an electronic device. As shown in FIG. 1, the picture processing method provided in this embodiment may include the following steps 201 to 204.
Step 201: While displaying a first interface, the electronic device receives a first input from a user.
In this embodiment, the first interface includes N target identifiers, each target identifier indicates one picture, the first input is an input by the user on M of the N target identifiers, N and M are both integers greater than 1, and M is less than or equal to N.
In this embodiment, the user may perform the first input on some of the picture identifiers displayed in the first interface, so that the electronic device displays a second interface showing the pictures indicated by those identifiers; the user can then act on these pictures so that the electronic device composites them into a corresponding composite picture.
Optionally, the first interface may be a thumbnail display interface of an album application; the user may trigger the electronic device to run the album application and thereby display the first interface.
Optionally, a target identifier may be any one of the following: a thumbnail of a picture, the name of a picture, the number of a picture, and the like.
It should be noted that, when the target identifiers are picture thumbnails, the N target identifiers displayed in the first interface may have the same size (for example, area) or different sizes.
Optionally, the first input may be a drag input on one target identifier; or the first input may be a selection input on several target identifiers followed by a drag input on one of those target identifiers.
Optionally, when M = 2, the first input is an input of the user dragging one target identifier onto another target identifier.
It can be understood that, when the first input acts on two target identifiers, the user may directly drag one identifier onto the other (that is, drag it from its own display position to the display position of the other identifier) so that the two identifiers have an overlapping region, without first selecting the two identifiers and then dragging.
It should be noted that the overlapping region may be understood as one identifier blocking part or all of another identifier, or one picture blocking part or all of another picture.
For example, take a mobile phone as the electronic device. As shown in (A) of FIG. 2, when the phone displays a thumbnail display interface 10 of the album application, if the user wants to composite picture 11 and picture 12 in the interface 10, then, as shown in (B) of FIG. 2, the user may drag picture 12 so that picture 11 and picture 12 have an overlapping region 20 (shaded in the figure).
Optionally, when M is greater than 2, the first input includes a first sub-input and a second sub-input, where the first sub-input is a selection input by the user on the M target identifiers, and the second sub-input is an input of dragging one of the M target identifiers onto another of the M target identifiers; or the first sub-input is a selection input on M-1 of the N target identifiers, and the second sub-input is an input of dragging one of the M-1 target identifiers onto another target identifier, or an input of dragging another target identifier onto one of the M-1 target identifiers, where the other target identifier is an identifier among the N target identifiers other than the M-1 target identifiers.
It can be understood that, when the first input acts on more than two target identifiers, the user first selects these identifiers to trigger the electronic device to determine the corresponding pictures as pictures to be composited, and then drags one of the identifiers onto another so that the two have an overlapping region.
It should be noted that the user may determine the M target identifiers directly through the first sub-input; or the user may first determine M-1 target identifiers through the first sub-input and then determine the remaining one through the second sub-input. The specific steps of determining the M target identifiers are not described in detail in this application.
For example, with reference to (A) of FIG. 2, as shown in (A) of FIG. 3, when the phone displays the thumbnail display interface 10 and the user wants to composite pictures 11, 12, 13, and 14 in the interface 10, the user may first select pictures 11, 12, 13, and 14 so that the phone marks them, and then, as shown in (B) of FIG. 3, drag picture 12 so that picture 11 and picture 12 have an overlapping region.
Step 202: In response to the first input, the electronic device updates the first interface to a second interface.
In this embodiment, the second interface includes the M pictures indicated by the M target identifiers.
Optionally, the second interface may be a picture editing interface, and may further include multiple controls, which may be at least one of the following: a filter control, an adjustment control, a doodle control, an annotation control, and the like.
Optionally, the second interface may further include a confirm control, used to trigger the electronic device to composite the pictures and obtain a composite picture.
Optionally, the electronic device may display the M pictures indicated by the M target identifiers in an arrangement in the second interface.
Optionally, with reference to FIG. 1, as shown in FIG. 4, step 202 may be implemented through the following step 202a.
Step 202a: In response to the first input, when the overlap size between one target identifier and another target identifier is greater than or equal to a first preset threshold, the electronic device updates the first interface to the second interface.
Optionally, the overlap size may be understood as the overlapping area of the identifiers, or as the ratio of the overlapping area to the area of one of the identifiers.
Optionally, when the overlap size is understood as the overlapping area, the electronic device may evaluate the overlapping area of the one target identifier and the other target identifier, and update the first interface to the second interface when that area is greater than or equal to a first preset area (that is, the first preset threshold).
Optionally, when the overlap size is understood as the proportion of the overlapping area to an identifier's total area, the electronic device may evaluate the ratio of the overlapping area of the two target identifiers to the area of the one target identifier (or the other target identifier), and update the first interface to the second interface when that ratio is greater than or equal to a first preset ratio (that is, the first preset threshold).
In this embodiment, the electronic device updates the first interface to the second interface only when the overlap size between the one target identifier and the other target identifier satisfies a preset condition. This prevents the interface from being switched by an accidental touch and thus improves the accuracy with which the electronic device responds to user input.
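The overlap test in step 202a reduces to rectangle-intersection arithmetic. The sketch below is illustrative only (the document specifies no implementation language; the `Rect`/`should_open_editor` names are invented here, and the 60% default mirrors the example ratio mentioned later in the description):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

def overlap_area(a: Rect, b: Rect) -> float:
    # Width/height of the intersection rectangle (0 when the rects are apart).
    ow = max(0.0, min(a.x + a.w, b.x + b.w) - max(a.x, b.x))
    oh = max(0.0, min(a.y + a.h, b.y + b.h) - max(a.y, b.y))
    return ow * oh

def should_open_editor(dragged: Rect, target: Rect, ratio_threshold: float = 0.6) -> bool:
    # Switch to the second interface only when the overlap covers at least
    # `ratio_threshold` of the dragged identifier's own area (the "first
    # preset threshold" expressed as a ratio).
    return overlap_area(dragged, target) >= ratio_threshold * (dragged.w * dragged.h)
```

A drag that barely grazes the target thumbnail fails the check, which is the accidental-touch protection described above.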
Optionally, the second interface includes a first preset region and a second preset region. The first preset region is used to display a pinned (top-layer) picture; when any picture overlaps the pinned picture, the pinned picture covers that picture within the overlapping region. With reference to FIG. 1, as shown in FIG. 5, before "updating the first interface to a second interface" in step 202, the picture processing method provided in this embodiment further includes the following step 301, and step 202 may be implemented through the following step 202b.
Step 301: In response to the first input, the electronic device determines the picture indicated by the other target identifier as the pinned picture.
Optionally, the electronic device may, according to the user's first input, determine either the picture indicated by the one target identifier or the picture indicated by the other target identifier as the pinned picture.
Optionally, after the electronic device determines the picture indicated by the other target identifier as the pinned picture in response to the first input, the user may, through an input, set the picture indicated by the one target identifier as the pinned picture instead, thereby switching the pinned picture.
Optionally, the user may double-tap the picture indicated by the one target identifier so that the electronic device swaps the layer priority of the two pictures (that is, the picture indicated by the one target identifier becomes the pinned picture, and the picture indicated by the other target identifier is unpinned).
Optionally, after the electronic device determines the picture indicated by the one target identifier as the pinned picture and thereby switches the pinned picture, it may display the picture indicated by the one target identifier in the first preset region and the picture indicated by the other target identifier in the second preset region.
It should be noted that, when the first input is the user dragging one target identifier (for example, identifier A) onto another target identifier (for example, identifier B), the electronic device may determine the picture indicated by identifier B as the pinned picture; conversely, when the first input is the user dragging identifier B onto identifier A, the electronic device may determine the picture indicated by identifier A as the pinned picture.
Step 202b: The electronic device displays the picture indicated by the other target identifier in the first preset region, and displays the pictures indicated by the remaining target identifiers in the second preset region.
In this embodiment, the remaining target identifiers are the identifiers among the M target identifiers other than the other target identifier.
Optionally, when M = 2, the electronic device may display the picture indicated by the other target identifier in the first preset region and the picture indicated by the one target identifier in the second preset region.
Optionally, the user may, through an input, trigger the electronic device to display the picture indicated by the one target identifier in the first preset region and the picture indicated by the other target identifier in the second preset region.
For example, with reference to (B) of FIG. 2, when the user's first input causes picture 11 and picture 12 to have an overlapping region 20, the phone may evaluate the ratio of the area of region 20 to the area of picture 11 (or picture 12). When this ratio is greater than or equal to a second preset ratio (for example, 60%), then, as shown in FIG. 6, the phone may update the thumbnail display interface 10 to a picture editing interface 21 that includes a first preset region 22, a second preset region 23, and a confirm control 24; the phone may display picture 11 in the first preset region 22 and picture 12 in the second preset region 23.
Optionally, when M is greater than 2, the electronic device may display the picture indicated by the other target identifier in the first preset region, and display, in the second preset region, the M-1 pictures indicated by the identifiers among the M target identifiers other than the other target identifier.
Optionally, the user may, through an input, trigger the electronic device to display, in the first preset region, any one of the M-1 pictures indicated by the identifiers other than the other target identifier, and display the picture indicated by the other target identifier in the second preset region.
For example, with reference to (B) of FIG. 3, as shown in (A) of FIG. 7, the phone may update the thumbnail display interface 10 to a picture editing interface 21 that includes a first preset region 22 and a second preset region 23; the phone may display picture 11 in the first preset region 22, and pictures 12, 13, and 14 in the second preset region 23.
Optionally, the second preset region includes a first sub-region and a second sub-region. "Displaying the pictures indicated by the remaining target identifiers in the second preset region" in step 202b may be implemented through the following step 202b1.
Step 202b1: The electronic device displays the picture indicated by the other target identifier in the first preset region, displays the picture indicated by the one target identifier in the first sub-region, and displays, in the second sub-region, the pictures indicated by the remaining target identifiers other than the one target identifier.
Optionally, when M is greater than 2, the second preset region may include a first sub-region and a second sub-region, where the first sub-region displays one of the M-1 pictures at a time, and the second sub-region displays the M-2 pictures among the M-1 pictures other than the one displayed in the first sub-region.
It should be noted that the one picture displayed in the first sub-region and the M-2 pictures displayed in the second sub-region together make up all of the M-1 pictures.
For example, with reference to (B) of FIG. 3, as shown in (B) of FIG. 7, the phone may update the thumbnail display interface 10 to a picture editing interface 21 that includes a first preset region 22 and a second preset region 23, where the second preset region 23 includes a first sub-region 231 and a second sub-region 232. The phone may display picture 11 in the first preset region 22, picture 12 in the first sub-region 231, and pictures 13 and 14 in the second sub-region 232.
In this embodiment, the electronic device may determine the picture indicated by the other target identifier as the pinned picture and display it in the first preset region, while displaying the pictures indicated by the other identifiers among the M target identifiers in the second preset region; the user can then act on these pictures so that the electronic device composites them.
Step 203: The electronic device receives a second input from the user on the M pictures.
Optionally, the user may, through inputs, trigger the electronic device to adjust the display position and display size of each of the M pictures in turn, thereby combining the M pictures.
Step 204: In response to the second input, the electronic device composites the M pictures according to the size of each of the M pictures to obtain a target composite picture.
Optionally, the compositing may include at least one of the following: cropping a picture, adjusting a picture's size, stitching pictures, and the like.
Optionally, when M = 2, the user may drag the picture indicated by the one target identifier, displayed in the second preset region, to the position of the picture indicated by the other target identifier displayed in the first preset region, so that the two pictures are combined. The user may then operate the confirm control to trigger the electronic device to composite the two pictures and obtain the target composite picture.
Optionally, the combination of the picture indicated by the one target identifier and the picture indicated by the other target identifier may be any one of the following: the distance between one picture (for example, the picture indicated by the one target identifier) and the other picture (for example, the picture indicated by the other target identifier) is greater than 0 (that is, the two pictures do not touch); the distance between them is equal to 0 (that is, their edge lines coincide but there is no overlapping region); or the two pictures have an overlapping region.
For example, with reference to FIG. 6, as shown in FIG. 8, the user may drag picture 12 to a position where it overlaps picture 11, and then operate the confirm control 24 to trigger the phone to composite picture 11 and picture 12 into the target composite picture.
Optionally, when M is greater than 2, the user may drag the M-1 pictures displayed in the second preset region, one by one, to the position of the picture indicated by the other target identifier displayed in the first preset region, so that the M-1 pictures and that picture are combined. The user may then operate the confirm control to trigger the electronic device to composite the M-1 pictures and that picture into the target composite picture.
For example, with reference to (A) of FIG. 7, as shown in FIG. 9, the user may drag pictures 12, 13, and 14 in turn to the position of picture 11, and then operate the confirm control 24 to trigger the phone to composite pictures 11, 12, 13, and 14 into the target composite picture.
Optionally, when M is greater than 2 and the second preset region includes a first sub-region and a second sub-region, the user may drag the one picture of the M-1 pictures displayed in the first sub-region to the position of the picture indicated by the other target identifier displayed in the first preset region, so that the two are combined, and then operate the confirm control to trigger the electronic device to composite them into one composite picture. The user may then select another picture among the M-2 pictures displayed in the second sub-region so that the electronic device displays it in the first sub-region, drag that picture to the position of the composite picture so that they are combined, and operate the confirm control again to trigger the electronic device to composite them into another composite picture. This continues until the user has acted on all M-2 pictures displayed in the second sub-region, triggering the electronic device to composite all M pictures and obtain the target composite picture.
For example, with reference to (B) of FIG. 7, as shown in (A) of FIG. 10, the user may drag picture 12 to the position of picture 11 and then operate the confirm control 24 to trigger the phone to composite picture 11 and picture 12 into a composite picture 15 (that is, the combination of pictures 11 and 12). As shown in (B) of FIG. 10, the user may then drag picture 13 to the position of composite picture 15 and operate the confirm control 24 again to trigger the phone to composite picture 13 and composite picture 15 into a composite picture 16 (the combination of pictures 11, 12, and 13). As shown in (C) of FIG. 10, the user may then drag picture 14 to the position of composite picture 16 and operate the confirm control 24 again to trigger the phone to composite picture 14 and composite picture 16 into the target composite picture (the combination of pictures 11, 12, 13, and 14).
This embodiment of this application provides a picture processing method. While the electronic device displays the first interface, the user can act on M of the N target identifiers displayed there to trigger the electronic device to update the first interface to the second interface; the user can then perform the second input on the M pictures indicated by the M target identifiers, so that the electronic device composites the M pictures according to the size of each of them, obtaining the target composite picture. When the user wants to composite multiple pictures through the electronic device, the user acts on the identifiers of those pictures in an interface that displays them, the electronic device displays another interface containing the pictures themselves, and the user's inputs in that interface let the device adjust each picture's display position and display size and composite the pictures according to their sizes to obtain the target composite picture. The user does not need to first edit each picture on the electronic device to the size required for compositing and only then composite the edited pictures, which saves user operations and thus improves the efficiency with which the electronic device produces a composite picture.
Optionally, the second input is the user's successive drag inputs on M-1 of the M pictures. With reference to FIG. 1, as shown in FIG. 11, step 204 may be implemented through the following step 204a.
Step 204a: In response to the second input, if every two of the M pictures have an overlapping region, the electronic device crops, according to the overlap size of each such pair, the overlapped part of one picture of the pair, and composites the cropped pictures with the other pictures to obtain the target composite picture.
Optionally, the other pictures are the pictures among the M pictures other than the cropped pictures.
Optionally, an overlap size greater than a second preset threshold may be understood as: the area of the overlapping region of two pictures is greater than a second preset area, or the ratio of the area of the overlapping region to the area of either of the two pictures is greater than a third preset ratio.
Optionally, the electronic device may, according to the overlap size of the overlapping region between any two pictures, crop the blocked part of the blocked picture of the pair.
It should be noted that, depending on the user's second input, the electronic device may crop some or all of the M pictures; the specific cropping depends on the actual situation and is not limited in this application.
Optionally, the electronic device may stitch the cropped pictures together with the uncropped pictures among the M pictures to obtain the target composite picture.
In this embodiment, the electronic device can crop all the pictures that need cropping among the M pictures at the same time, and stitch the cropped and uncropped pictures together in one pass to obtain the target composite picture, which improves the efficiency with which the electronic device produces a composite picture.
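As a rough illustration of the batch variant in step 204a, the sketch below crops away the occluded strip of each lower-layer picture and sums the remaining widths. It is a simplifying assumption, not the patent's method: pictures are taken to be dragged left-to-right into a horizontal strip, so each higher layer can occlude only the right edge of the pictures below it, and the `stitched_width` name is invented here.

```python
def stitched_width(placements):
    """placements: list of (x, width) pairs in drawing order; later entries
    sit on higher layers. Returns the total width of the composite after the
    occluded strip of each lower picture is cropped away (horizontal
    stitching only)."""
    total = 0.0
    for i, (x, w) in enumerate(placements):
        right = x + w
        for xj, _wj in placements[i + 1:]:
            if xj < right:          # a higher layer covers this right edge
                right = max(x, xj)  # keep only the visible strip
        total += max(0.0, right - x)
    return total
```

For two 10-unit-wide pictures overlapped by 4 units, the lower one is cropped to a 6-unit strip and the composite is 16 units wide.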
Optionally, the second input includes M-1 sub-inputs, each of which is a drag input by the user on one of M-1 pictures, the M-1 pictures being pictures among the M pictures. With reference to FIG. 1, as shown in FIG. 12, step 204 may be implemented through the following step 204b.
Step 204b: In response to the second input, the electronic device composites the first picture and a first target picture according to the overlap size of the two to obtain a first composite picture; then composites the second picture and the first composite picture according to the overlap size of the two to obtain a second composite picture; and so on, until the (M-1)-th picture is composited, yielding the target composite picture.
Optionally, the first picture, the second picture, and the (M-1)-th picture are all pictures among the M-1 pictures, and the first target picture is the picture among the M pictures other than the M-1 pictures.
Optionally, compositing the first picture and the first target picture according to their overlap size to obtain the first composite picture includes: if the first picture and the first target picture have an overlapping region, cropping the first picture according to the overlap size and compositing the cropped first picture with the first target picture to obtain the first composite picture; if they have no overlapping region, the electronic device composites the first picture with the first target picture directly to obtain the first composite picture.
Optionally, compositing the second picture and the first composite picture according to their overlap size to obtain the second composite picture includes: if the second picture and the first composite picture have an overlapping region, cropping the second picture according to the overlap size and compositing the cropped second picture with the first composite picture to obtain the second composite picture; if they have no overlapping region, the electronic device composites the second picture with the first composite picture directly to obtain the second composite picture.
This continues in the same way until the (M-1)-th picture is composited, yielding the target composite picture.
Optionally, after the user drags one picture (for example, the first picture) to the position of another picture (for example, the first target picture), the user may operate the compositing control so that the electronic device composites the two pictures (that is, the first picture and the first target picture) into one composite picture (for example, the first composite picture).
It should be noted that, each time the user drags one picture to the position of another, the user may operate the compositing control so that the electronic device composites the two pictures into one composite picture, until the electronic device composites the last of the M pictures and obtains the target composite picture.
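The iterative flow of step 204b is essentially a fold: each drag-and-confirm merges one more picture into the running composite, which then becomes the base for the next merge. A minimal sketch (the function names and the pluggable `compose_pair` callback are illustrative assumptions, not part of the described method):

```python
def compose_sequentially(base, rest, compose_pair):
    """Fold the M-1 dragged pictures into the base picture one at a time:
    after each drag-and-confirm, the intermediate composite becomes the new
    base for the next merge, mirroring step 204b and FIG. 10."""
    result = base
    for picture in rest:
        result = compose_pair(result, picture)  # crop if overlapping, then stitch
    return result

# Toy usage: with widths standing in for pictures and simple width addition
# standing in for crop-and-stitch, four pictures fold into one composite.
total_width = compose_sequentially(10, [6, 8, 4], lambda a, b: a + b)
```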
In this embodiment, the electronic device can, according to the user's inputs, composite one of the M pictures at a time with the first target picture or with the composite picture obtained so far, producing the target composite picture through multiple compositing passes. After each pass the user can see the display effect of the composited picture, and can thus flexibly choose which pictures to composite.
It should be noted that the picture processing method provided in the embodiments of this application may be performed by a picture processing apparatus, or by a control module in the picture processing apparatus for performing the picture processing method. In the embodiments of this application, the picture processing apparatus performing the picture processing method is taken as an example to describe the picture processing method provided herein.
Optionally, "compositing the first picture and the first target picture according to the overlap size of the two" in step 204b may be implemented through the following step 204b1 or step 204b2.
Step 204b1: If the first picture and the first target picture have no overlapping region, the electronic device crops the first picture or the first target picture according to the width value of the first picture or the width value of the first target picture, and composites the cropped picture with the uncropped picture.
It should be noted that the first picture and the first target picture having no overlapping region may be understood as: the two pictures are adjacent and their edge lines coincide.
It should be noted that the width value of the first picture and the width value of the first target picture are measured in the same direction, a direction determined by the user.
Optionally, when the first picture and the first target picture have no overlapping region, the electronic device may crop the wider of the two pictures according to the width value of the narrower one, so that compositing the cropped picture with the uncropped picture yields a rectangular picture.
For example, with reference to FIG. 6, (A) of FIG. 13 is a schematic diagram of picture cropping according to an embodiment of this application. The first picture 30 and the first target picture 31 have no overlapping region, and the width value of the first picture 30 is greater than that of the first target picture 31, so the phone may crop the first picture 30 to remove regions 32 and 33. (B) of FIG. 13 shows the picture obtained by compositing the cropped first picture 30 with the first target picture 31.
Step 204b2: If the first picture and the first target picture have an overlapping region, the electronic device crops at least one of the first picture and the first target picture according to the width value of the overlapping region, and performs compositing based on the cropped picture(s).
Optionally, when the first picture and the first target picture have an overlapping region: if the width values of both pictures are greater than the width value of the overlapping region, then the region of each picture beyond the overlap width is cropped, together with the overlapping region of the bottom picture (that is, the picture that is not the pinned picture); or, if one of the two width values is greater than the width value of the overlapping region and the other equals it, then the region beyond the overlap width in the wider picture is cropped, together with the overlapping region of the bottom picture.
For example, with reference to FIG. 6, (A) of FIG. 14 is another schematic diagram of picture cropping according to an embodiment of this application. The first picture 30 and the first target picture 31 have an overlapping region 34 (filled in black in the figure); the first picture 30 is the pinned picture, and the width values of both pictures are greater than the width value of region 34, so the phone may crop the first picture 30 and the first target picture 31, removing region 35 of the first picture 30, the overlapping region 34 of the first picture 30, and region 36 of the first target picture 31. (B) of FIG. 14 shows the picture obtained by compositing the cropped first picture 30 with the cropped first target picture 31.
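For the no-overlap case of step 204b1, cropping the wider picture down to the narrower width keeps the composite rectangular, consistent with FIG. 13 where strips (regions 32 and 33) are removed from both sides of the wider picture. The sketch below assumes the removed strip is split evenly between the two sides, which the description does not state explicitly; the function name is invented here.

```python
def crop_boxes_for_vertical_stitch(w1, h1, w2, h2):
    """Return one (left, top, right, bottom) crop box per picture, in each
    picture's own coordinates, so that both pictures come out min(w1, w2)
    wide and can be stitched edge-to-edge into a rectangle. The wider
    picture loses an equal strip on each side."""
    w = min(w1, w2)

    def centered(wi, hi):
        margin = (wi - w) / 2
        return (margin, 0, wi - margin, hi)

    return centered(w1, h1), centered(w2, h2)
```

For a 10-wide picture stitched above a 6-wide one, the first box trims 2 units off each side of the wider picture and the second leaves the narrower picture untouched.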
FIG. 15 is a schematic diagram of a possible structure of the picture processing apparatus involved in the embodiments of this application. As shown in FIG. 15, the picture processing apparatus 70 may include a receiving module 71, an updating module 72, and a processing module 73.
The receiving module 71 is configured to receive a first input from a user while a first interface is displayed, where the first interface includes N target identifiers, each target identifier indicates one picture, the first input is an input by the user on M of the N target identifiers, N and M are both integers greater than 1, and M is less than or equal to N. The updating module 72 is configured to update the first interface to a second interface in response to the first input received by the receiving module 71, where the second interface includes the M pictures indicated by the M target identifiers. The receiving module 71 is further configured to receive a second input from the user on the M pictures. The processing module 73 is configured to composite, in response to the second input received by the receiving module 71, the M pictures according to the size of each of the M pictures to obtain a target composite picture.
In a possible implementation, when M = 2, the first input is an input of the user dragging one target identifier onto another target identifier; when M is greater than 2, the first input includes a first sub-input and a second sub-input, where the first sub-input is a selection input by the user on the M target identifiers, and the second sub-input is an input of dragging one of the M target identifiers onto another of the M target identifiers; or the first sub-input is a selection input on M-1 of the N target identifiers, and the second sub-input is an input of dragging one of the M-1 target identifiers onto another target identifier, or an input of dragging another target identifier onto one of the M-1 target identifiers, the other target identifier being an identifier among the N target identifiers other than the M-1 target identifiers.
In a possible implementation, the updating module 72 is specifically configured to update the first interface to the second interface when the overlap size between the one target identifier and the other target identifier is greater than or equal to a first preset threshold.
In a possible implementation, the second interface includes a first preset region and a second preset region; the first preset region is used to display a pinned picture, and when any picture overlaps the pinned picture, the pinned picture covers that picture within the overlapping region. With reference to FIG. 15, as shown in FIG. 16, the picture processing apparatus 70 may further include a determining module 74, configured to determine the picture indicated by the other target identifier as the pinned picture before the updating module 72 updates the first interface to the second interface. The updating module 72 is specifically configured to display the picture indicated by the other target identifier in the first preset region and display the pictures indicated by the remaining target identifiers in the second preset region, the remaining target identifiers being the identifiers among the M target identifiers other than the other target identifier.
In a possible implementation, the second preset region includes a first sub-region and a second sub-region. The updating module 72 is specifically configured to display the picture indicated by the one target identifier in the first sub-region and display, in the second sub-region, the pictures indicated by the remaining target identifiers other than the one target identifier.
In a possible implementation, the second input is the user's successive drag inputs on M-1 of the M pictures. The processing module 73 is specifically configured to: if every two of the M pictures have an overlapping region, crop, according to the overlap size of each such pair, the overlapped part of one picture of the pair, and composite the cropped pictures with the other pictures to obtain the target composite picture, the other pictures being the pictures among the M pictures other than the cropped pictures.
In a possible implementation, the second input includes M-1 sub-inputs, each of which is a drag input on one of M-1 pictures, the M-1 pictures being pictures among the M pictures. The processing module 73 is specifically configured to composite the first picture and a first target picture according to their overlap size to obtain a first composite picture; composite the second picture and the first composite picture according to their overlap size to obtain a second composite picture; and so on, until the (M-1)-th picture is composited, yielding the target composite picture. The first picture, the second picture, and the (M-1)-th picture are all pictures among the M-1 pictures, and the first target picture is the picture among the M pictures other than the M-1 pictures.
In a possible implementation, the processing module 73 is specifically configured to: if the first picture and the first target picture have no overlapping region, crop the first picture or the first target picture according to the width value of the first picture or of the first target picture, and composite the cropped picture with the uncropped picture; or, if the first picture and the first target picture have an overlapping region, crop at least one of the two according to the width value of the overlapping region and perform compositing based on the cropped picture(s).
The picture processing apparatus provided in this embodiment of this application can implement the processes implemented by the picture processing apparatus in the above method embodiments. To avoid repetition, details are not described here again.
The picture processing apparatus in this embodiment of this application may be an apparatus, or a component, an integrated circuit, or a chip in a picture processing apparatus. The apparatus may be a mobile electronic device or a non-mobile electronic device. For example, the mobile electronic device may be a mobile phone, a tablet computer, a laptop computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like. This is not specifically limited in the embodiments of this application.
The picture processing apparatus in this embodiment of this application may be an apparatus with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of this application.
This embodiment of this application provides a picture processing apparatus. When the user wants to composite multiple pictures through the picture processing apparatus, the user acts on the identifiers of those pictures in an interface that displays them, so that the apparatus displays another interface containing the pictures themselves; the user's inputs in that interface let the apparatus adjust each picture's display position and display size and composite the pictures according to their sizes to obtain the target composite picture. The user does not need to first edit each picture through the apparatus to the size required for compositing and only then composite the edited pictures, which saves user operations and thus improves the efficiency with which the picture processing apparatus produces a composite picture.
Optionally, an embodiment of this application further provides an electronic device, including a processor, a memory, and a program or instruction stored in the memory and runnable on the processor. When executed by the processor, the program or instruction implements the processes of the above picture processing method embodiments with the same technical effects. To avoid repetition, details are not described here again.
It should be noted that the electronic devices in the embodiments of this application include the mobile and non-mobile electronic devices described above.
FIG. 17 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of this application.
The electronic device 100 includes, but is not limited to, components such as a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
A person skilled in the art can understand that the electronic device 100 may further include a power supply (such as a battery) that powers the components; the power supply may be logically connected to the processor 110 through a power management system, which implements functions such as managing charging, discharging, and power consumption. The structure shown in FIG. 17 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange components differently. Details are not repeated here.
The user input unit 107 is configured to receive a first input from a user while a first interface is displayed, where the first interface includes N target identifiers, each target identifier indicates one picture, the first input is an input by the user on M of the N target identifiers, N and M are both integers greater than 1, and M is less than or equal to N.
The display unit 106 is configured to update the first interface to a second interface in response to the first input, where the second interface includes the M pictures indicated by the M target identifiers.
The user input unit 107 is further configured to receive a second input from the user on the M pictures.
The processor 110 is configured to composite, in response to the second input, the M pictures according to the size of each of the M pictures to obtain a target composite picture.
This embodiment of this application provides an electronic device. When the user wants to composite multiple pictures through the electronic device, the user acts on the identifiers of those pictures in an interface that displays them, so that the electronic device displays another interface containing the pictures themselves; the user's inputs in that interface let the device adjust each picture's display position and display size and composite the pictures according to their sizes to obtain the target composite picture. The user does not need to first edit each picture on the electronic device to the size required for compositing and only then composite the edited pictures, which saves user operations and thus improves the efficiency with which the electronic device produces a composite picture.
Optionally, the display unit 106 is further configured to update the first interface to the second interface when the overlap size between one target identifier and another target identifier is greater than or equal to a first preset threshold.
The processor 110 is further configured to determine the picture indicated by the other target identifier as the pinned picture before the first interface is updated to the second interface.
The display unit 106 is further configured to display the picture indicated by the other target identifier in the first preset region and display the pictures indicated by the remaining target identifiers in the second preset region, the remaining target identifiers being the identifiers among the M target identifiers other than the other target identifier.
The display unit 106 is further configured to display the picture indicated by the one target identifier in the first sub-region and display, in the second sub-region, the pictures indicated by the remaining target identifiers other than the one target identifier.
The processor 110 is further configured to: if every two of the M pictures have an overlapping region, crop, according to the overlap size of each such pair, the overlapped part of one picture of the pair, and composite the cropped pictures with the other pictures to obtain the target composite picture, the other pictures being the pictures among the M pictures other than the cropped pictures.
The processor 110 is further configured to composite the first picture and a first target picture according to their overlap size to obtain a first composite picture; composite the second picture and the first composite picture according to their overlap size to obtain a second composite picture; and so on, until the (M-1)-th picture is composited, yielding the target composite picture.
The processor 110 is further configured to: if the first picture and the first target picture have no overlapping region, crop the first picture or the first target picture according to the width value of the first picture or of the first target picture, and composite the cropped picture with the uncropped picture; or, if the first picture and the first target picture have an overlapping region, crop at least one of the two according to the width value of the overlapping region and perform compositing based on the cropped picture(s).
An embodiment of this application further provides a readable storage medium storing a program or instruction. When executed by a processor, the program or instruction implements the processes of the above picture processing method embodiments with the same technical effects. To avoid repetition, details are not described here again.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
An embodiment of this application further provides a chip, including a processor and a communication interface coupled to the processor. The processor is configured to run a program or instruction to implement the processes of the above picture processing method embodiments with the same technical effects. To avoid repetition, details are not described here again.
It should be understood that the chip mentioned in the embodiments of this application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "includes a ..." does not preclude the existence of additional identical elements in the process, method, article, or apparatus that includes the element. In addition, it should be pointed out that the scope of the methods and apparatuses in the implementations of this application is not limited to performing functions in the order shown or discussed; it may also include performing functions in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the description of the above implementations, a person skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of this application, in essence or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of this application.
The embodiments of this application have been described above with reference to the accompanying drawings, but this application is not limited to the above specific implementations, which are merely illustrative rather than restrictive. Inspired by this application, a person of ordinary skill in the art can make many other forms without departing from the purpose of this application and the scope protected by the claims, all of which fall within the protection of this application.

Claims (21)

  1. A picture processing method, comprising:
    receiving a first input from a user while a first interface is displayed, wherein the first interface comprises N target identifiers, each target identifier indicates one picture, the first input is an input by the user on M target identifiers among the N target identifiers, N and M are both integers greater than 1, and M is less than or equal to N;
    in response to the first input, updating the first interface to a second interface, wherein the second interface comprises M pictures indicated by the M target identifiers;
    receiving a second input from the user on the M pictures; and
    in response to the second input, compositing the M pictures according to a size of each of the M pictures to obtain a target composite picture.
  2. The method according to claim 1, wherein, in a case that M = 2, the first input is an input of the user dragging one target identifier onto another target identifier;
    in a case that M is greater than 2, the first input comprises a first sub-input and a second sub-input;
    wherein the first sub-input is a selection input by the user on the M target identifiers, and the second sub-input is an input of the user dragging one of the M target identifiers onto another of the M target identifiers; or
    the first sub-input is a selection input by the user on M-1 target identifiers among the N target identifiers, and the second sub-input is an input of the user dragging one of the M-1 target identifiers onto another target identifier, or the second sub-input is an input of the user dragging another target identifier onto one of the M-1 target identifiers; the another target identifier is an identifier among the N target identifiers other than the M-1 target identifiers.
  3. The method according to claim 2, wherein the updating the first interface to a second interface comprises:
    updating the first interface to the second interface in a case that an overlap size between the one target identifier and the another target identifier is greater than or equal to a first preset threshold.
  4. The method according to claim 2 or 3, wherein the second interface comprises a first preset region and a second preset region; the first preset region is used to display a pinned picture, and in a case that any picture and the pinned picture have an overlapping region, the pinned picture covers the any picture within the overlapping region;
    before the updating the first interface to a second interface, the method further comprises:
    determining the picture indicated by the another target identifier as the pinned picture; and
    the updating the first interface to a second interface comprises:
    displaying the picture indicated by the another target identifier in the first preset region, and displaying pictures indicated by other target identifiers in the second preset region, wherein the other target identifiers are identifiers among the M target identifiers other than the another target identifier.
  5. The method according to claim 4, wherein the second preset region comprises a first sub-region and a second sub-region; and
    the displaying pictures indicated by other target identifiers in the second preset region comprises:
    displaying the picture indicated by the one target identifier in the first sub-region, and displaying, in the second sub-region, pictures indicated by identifiers among the other target identifiers other than the one target identifier.
  6. The method according to claim 1, wherein the compositing the M pictures according to a size of each of the M pictures to obtain a target composite picture comprises:
    in a case that every two of the M pictures have an overlapping region, cropping, according to an overlap size of the every two pictures, an overlapped part of one picture of the every two pictures, and compositing the cropped pictures with other pictures to obtain the target composite picture, wherein the other pictures are pictures among the M pictures other than the cropped pictures.
  7. The method according to claim 1, wherein the second input comprises M-1 sub-inputs, each sub-input is a drag input by the user on one of M-1 pictures, and the M-1 pictures are pictures among the M pictures;
    the compositing the M pictures according to a size of each of the M pictures to obtain a target composite picture comprises:
    compositing a first picture and a first target picture according to sizes of the first picture and the first target picture to obtain a first composite picture; compositing a second picture and the first composite picture according to sizes of the second picture and the first composite picture to obtain a second composite picture; and so on, until an (M-1)-th picture is composited, to obtain the target composite picture;
    wherein the first picture, the second picture, and the (M-1)-th picture are all pictures among the M-1 pictures, and the first target picture is a picture among the M pictures other than the M-1 pictures.
  8. The method according to claim 7, wherein the compositing a first picture and a first target picture according to sizes of the first picture and the first target picture comprises:
    in a case that the first picture and the first target picture have no overlapping region, cropping the first picture or the first target picture according to a width value of the first picture or a width value of the first target picture, and compositing the cropped picture with the uncropped picture; and
    in a case that the first picture and the first target picture have an overlapping region, cropping at least one of the first picture and the first target picture according to a width value of the overlapping region, and performing compositing based on the cropped picture.
  9. A picture processing apparatus, comprising: a receiving module, an updating module, and a processing module;
    wherein the receiving module is configured to receive a first input from a user while a first interface is displayed, wherein the first interface comprises N target identifiers, each target identifier indicates one picture, the first input is an input by the user on M target identifiers among the N target identifiers, N and M are both integers greater than 1, and M is less than or equal to N;
    the updating module is configured to update the first interface to a second interface in response to the first input received by the receiving module, wherein the second interface comprises M pictures indicated by the M target identifiers;
    the receiving module is further configured to receive a second input from the user on the M pictures; and
    the processing module is configured to composite, in response to the second input received by the receiving module, the M pictures according to a size of each of the M pictures to obtain a target composite picture.
  10. The picture processing apparatus according to claim 9, wherein, in a case that M = 2, the first input is an input of the user dragging one target identifier onto another target identifier;
    in a case that M is greater than 2, the first input comprises a first sub-input and a second sub-input;
    wherein the first sub-input is a selection input by the user on the M target identifiers, and the second sub-input is an input of the user dragging one of the M target identifiers onto another of the M target identifiers; or
    the first sub-input is a selection input by the user on M-1 target identifiers among the N target identifiers, and the second sub-input is an input of the user dragging one of the M-1 target identifiers onto another target identifier, or the second sub-input is an input of the user dragging another target identifier onto one of the M-1 target identifiers; the another target identifier is an identifier among the N target identifiers other than the M-1 target identifiers.
  11. The picture processing apparatus according to claim 10, wherein the updating module is specifically configured to update the first interface to the second interface in a case that an overlap size between the one target identifier and the another target identifier is greater than or equal to a first preset threshold.
  12. The picture processing apparatus according to claim 10 or 11, wherein the second interface comprises a first preset region and a second preset region; the first preset region is used to display a pinned picture, and in a case that any picture and the pinned picture have an overlapping region, the pinned picture covers the any picture within the overlapping region;
    the picture processing apparatus further comprises: a determining module;
    the determining module is configured to determine the picture indicated by the another target identifier as the pinned picture before the updating module updates the first interface to the second interface; and
    the updating module is specifically configured to display the picture indicated by the another target identifier in the first preset region, and display pictures indicated by other target identifiers in the second preset region, wherein the other target identifiers are identifiers among the M target identifiers other than the another target identifier.
  13. The picture processing apparatus according to claim 12, wherein the second preset region comprises a first sub-region and a second sub-region; and
    the updating module is specifically configured to display the picture indicated by the one target identifier in the first sub-region, and display, in the second sub-region, pictures indicated by identifiers among the other target identifiers other than the one target identifier.
  14. The picture processing apparatus according to claim 9, wherein the processing module is specifically configured to: in a case that every two of the M pictures have an overlapping region, crop, according to an overlap size of the every two pictures, an overlapped part of one picture of the every two pictures, and composite the cropped pictures with other pictures to obtain the target composite picture, wherein the other pictures are pictures among the M pictures other than the cropped pictures.
  15. The picture processing apparatus according to claim 9, wherein the second input comprises M-1 sub-inputs, each sub-input is a drag input by the user on one of M-1 pictures, and the M-1 pictures are pictures among the M pictures;
    the processing module is specifically configured to composite a first picture and a first target picture according to sizes of the first picture and the first target picture to obtain a first composite picture; composite a second picture and the first composite picture according to sizes of the second picture and the first composite picture to obtain a second composite picture; and so on, until an (M-1)-th picture is composited, to obtain the target composite picture;
    wherein the first picture, the second picture, and the (M-1)-th picture are all pictures among the M-1 pictures, and the first target picture is a picture among the M pictures other than the M-1 pictures.
  16. The picture processing apparatus according to claim 15, wherein the processing module is specifically configured to: in a case that the first picture and the first target picture have no overlapping region, crop the first picture or the first target picture according to a width value of the first picture or a width value of the first target picture, and composite the cropped picture with the uncropped picture; or, in a case that the first picture and the first target picture have an overlapping region, crop at least one of the first picture and the first target picture according to a width value of the overlapping region, and perform compositing based on the cropped picture.
  17. An electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and runnable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the picture processing method according to any one of claims 1 to 8.
  18. A readable storage medium storing a program or instruction, wherein the program or instruction, when executed by a processor, implements the steps of the picture processing method according to any one of claims 1 to 8.
  19. A chip, comprising a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is configured to run a program or instruction to implement the steps of the picture processing method according to any one of claims 1 to 8.
  20. A computer program product, wherein the program product is stored in a non-volatile storage medium, and the program product is executed by at least one processor to implement the steps of the picture processing method according to any one of claims 1 to 8.
  21. A picture processing device, comprising a picture processing apparatus configured to perform the picture processing method according to any one of claims 1 to 8.
PCT/CN2021/099182 2020-06-11 2021-06-09 Picture processing method and apparatus, and electronic device WO2021249436A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2022576090A JP2023529219A (ja) 2020-06-11 2021-06-09 ピクチャ処理方法、装置及び電子機器
EP21822401.2A EP4160522A4 (en) 2020-06-11 2021-06-09 IMAGE PROCESSING METHOD AND DEVICE AND ELECTRONIC DEVICE
US18/078,887 US20230106434A1 (en) 2020-06-11 2022-12-09 Picture processing method and apparatus, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010531536.1A CN111833247B (zh) 2020-06-11 2020-06-11 图片处理方法、装置及电子设备
CN202010531536.1 2020-06-11

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/078,887 Continuation US20230106434A1 (en) 2020-06-11 2022-12-09 Picture processing method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2021249436A1 true WO2021249436A1 (zh) 2021-12-16

Family

ID=72897636

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/099182 WO2021249436A1 (zh) 2020-06-11 2021-06-09 图片处理方法、装置及电子设备

Country Status (5)

Country Link
US (1) US20230106434A1 (zh)
EP (1) EP4160522A4 (zh)
JP (1) JP2023529219A (zh)
CN (1) CN111833247B (zh)
WO (1) WO2021249436A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112783379B (zh) * 2019-11-04 2023-07-28 华为终端有限公司 一种选择图片的方法和电子设备
CN111833247B (zh) * 2020-06-11 2024-06-14 维沃移动通信有限公司 图片处理方法、装置及电子设备

Citations (7)

Publication number Priority date Publication date Assignee Title
US20140325439A1 (en) * 2013-04-24 2014-10-30 Samsung Electronics Co., Ltd. Method for outputting image and electronic device thereof
US20150067554A1 (en) * 2013-09-02 2015-03-05 Samsung Electronics Co., Ltd. Method and electronic device for synthesizing image
CN106484251A (zh) * 2016-06-30 2017-03-08 北京金山安全软件有限公司 一种图片处理的方法、装置及电子设备
CN110084871A (zh) * 2019-05-06 2019-08-02 珠海格力电器股份有限公司 图像排版方法及装置、电子终端
CN110490808A (zh) * 2019-08-27 2019-11-22 腾讯科技(深圳)有限公司 图片拼接方法、装置、终端及存储介质
CN111124231A (zh) * 2019-12-26 2020-05-08 维沃移动通信有限公司 图片生成方法及电子设备
CN111833247A (zh) * 2020-06-11 2020-10-27 维沃移动通信有限公司 图片处理方法、装置及电子设备

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US20050069225A1 (en) * 2003-09-26 2005-03-31 Fuji Xerox Co., Ltd. Binding interactive multichannel digital document system and authoring tool
JP2011119974A (ja) * 2009-12-03 2011-06-16 Sony Corp パノラマ画像合成装置、パノラマ画像合成方法、及びプログラム
US10088989B2 (en) * 2014-11-18 2018-10-02 Duelight Llc System and method for computing operations based on a first and second user input
KR20140122952A (ko) * 2013-04-11 2014-10-21 삼성전자주식회사 이미지 합성 방법 및 이를 구현하는 전자 장치
US9185284B2 (en) * 2013-09-06 2015-11-10 Qualcomm Incorporated Interactive image composition
CN105100642B (zh) * 2015-07-30 2018-11-20 努比亚技术有限公司 图像处理方法和装置
CN108399038A (zh) * 2018-01-17 2018-08-14 链家网(北京)科技有限公司 一种图片合成方法及移动终端
JP2019126528A (ja) * 2018-01-24 2019-08-01 コニカミノルタ株式会社 画像処理装置、放射線撮影システム、放射線長尺画像撮影方法及びプログラム

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
US20140325439A1 (en) * 2013-04-24 2014-10-30 Samsung Electronics Co., Ltd. Method for outputting image and electronic device thereof
US20150067554A1 (en) * 2013-09-02 2015-03-05 Samsung Electronics Co., Ltd. Method and electronic device for synthesizing image
CN106484251A (zh) * 2016-06-30 2017-03-08 北京金山安全软件有限公司 一种图片处理的方法、装置及电子设备
CN110084871A (zh) * 2019-05-06 2019-08-02 珠海格力电器股份有限公司 图像排版方法及装置、电子终端
CN110490808A (zh) * 2019-08-27 2019-11-22 腾讯科技(深圳)有限公司 图片拼接方法、装置、终端及存储介质
CN111124231A (zh) * 2019-12-26 2020-05-08 维沃移动通信有限公司 图片生成方法及电子设备
CN111833247A (zh) * 2020-06-11 2020-10-27 维沃移动通信有限公司 图片处理方法、装置及电子设备

Non-Patent Citations (1)

Title
See also references of EP4160522A4 *

Also Published As

Publication number Publication date
EP4160522A1 (en) 2023-04-05
CN111833247B (zh) 2024-06-14
JP2023529219A (ja) 2023-07-07
EP4160522A4 (en) 2023-11-29
US20230106434A1 (en) 2023-04-06
CN111833247A (zh) 2020-10-27

Similar Documents

Publication Publication Date Title
WO2021249436A1 (zh) 图片处理方法、装置及电子设备
US20140123042A1 (en) Mobile terminal and method for controlling the same
CN112165553B (zh) 图像生成方法、装置、电子设备及计算机可读存储介质
US20240031668A1 (en) Photographing interface display method and apparatus, electronic device, and medium
EP4207737A1 (en) Video shooting method, video shooting apparatus, and electronic device
US11972274B2 (en) Application management method and apparatus, and electronic device
CN104220975A (zh) 手势响应图像捕捉控制和/或对图像的操作
CN112714255A (zh) 拍摄方法、装置、电子设备及可读存储介质
CN112954210A (zh) 拍照方法、装置、电子设备及介质
CN112584043B (zh) 辅助对焦方法、装置、电子设备及存储介质
CN108781254A (zh) 拍照预览方法、图形用户界面及终端
CN111190677A (zh) 信息显示方法、信息显示装置及终端设备
CN112929566B (zh) 显示控制方法、装置、电子设备及介质
CN112822394B (zh) 显示控制方法、装置、电子设备及可读存储介质
WO2021238721A1 (zh) 图片显示方法及装置
CN111625166B (zh) 图片显示方法及装置
CN114518822A (zh) 应用图标管理方法、装置和电子设备
WO2024041468A1 (zh) 文件的处理方法、装置、电子设备和可读存储介质
CN111885298B (zh) 图像处理方法及装置
WO2023185701A1 (zh) 一种显示方法及其装置、电子设备和可读存储介质
CN111638844A (zh) 截屏方法、装置及电子设备
CN112328149B (zh) 图片格式的设置方法、装置及电子设备
CN111966259B (zh) 截图方法、装置及电子设备
CN114845171A (zh) 视频编辑方法、装置及电子设备
CN112765620A (zh) 显示控制方法、装置、电子设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21822401

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022576090

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021822401

Country of ref document: EP

Effective date: 20221228