CN112449110A - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN112449110A
Authority
CN
China
Prior art keywords
image
target
user
template
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011224328.3A
Other languages
Chinese (zh)
Other versions
CN112449110B (en)
Inventor
刘耘彰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202011224328.3A priority Critical patent/CN112449110B/en
Publication of CN112449110A publication Critical patent/CN112449110A/en
Application granted granted Critical
Publication of CN112449110B publication Critical patent/CN112449110B/en
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Abstract

The present application discloses an image processing method, an image processing apparatus, and an electronic device, belongs to the field of communication technology, and can solve the problem in the related art that a user cannot customize a jigsaw template. The method includes: identifying a target picture and generating a target template, where the target template includes N regions and each region corresponds to a region number; adding M first images to M regions of the target template according to the image numbers of the M first images, one first image being added to each region, where the region number of each region corresponds to the image number of the first image added to that region; the M regions are the M regions, among the N regions, whose region numbers correspond to the image numbers of the M first images; M is less than or equal to N; and stitching all the images in the target template to generate a second image. The embodiment of the present application applies to a scenario in which a user uses an electronic device to make a photo jigsaw.

Description

Image processing method and device and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of communication, in particular to an image processing method and device and electronic equipment.
Background
As the photographing capability of electronic devices continues to improve, more and more users choose to take photos with an electronic device and to share them after processing. The photo jigsaw is one type of picture processing that allows a user to present multiple photos in a specific arrangement within a single picture, which improves the aesthetics and integrity of the photos and makes them convenient to share.
In the related art, a user may select a preferred jigsaw template in an image processing APP installed on an electronic device and then select the photos to be combined from an album; the electronic device adds the photos selected by the user to the jigsaw template and generates a new picture.
However, in the above process, the user can only use the fixed templates provided by the image processing APP and cannot customize a template.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method, an image processing device and electronic equipment, which can solve the problem that a user cannot customize a jigsaw template in the related art.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides an image processing method, including: identifying a target picture and generating a target template, where the target template includes N regions and each region corresponds to a region number; adding M first images to M regions of the target template according to the image numbers of the M first images, one first image being added to each region, where the region number of each region corresponds to the image number of the first image added to that region; the M regions are the M regions, among the N regions, whose region numbers correspond to the image numbers of the M first images; M is less than or equal to N; and stitching all the images in the target template to generate a second image.
In a second aspect, an embodiment of the present application further provides an image processing apparatus, including an identification module, a generation module, and an addition module. The identification module is configured to identify a target picture. The generation module is configured to generate a target template, where the target template includes N regions and each region corresponds to a region number. The addition module is configured to add M first images to M regions of the target template generated by the generation module according to the image numbers of the M first images, one first image being added to each region, where the region number of each region corresponds to the image number of the first image added to that region; the M regions are the M regions, among the N regions, whose region numbers correspond to the image numbers of the M first images; M is less than or equal to N. The generation module is further configured to stitch all the images in the target template to generate a second image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the image processing method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the present application, a target template containing N numbered regions is generated by identifying a picture hand-drawn by the user. The user can then add M numbered first images to the M regions of the target template whose region numbers correspond, and finally all the images contained in the target template are stitched to generate a second, combined image. In this way, the user can customize the jigsaw template according to his or her own preferences when editing photos with a jigsaw application.
Drawings
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2 is one of schematic diagrams of an interface applied by an image processing method according to an embodiment of the present application;
fig. 3 is a second schematic diagram of an interface applied by an image processing method according to the embodiment of the present application;
fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 6 is a second schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are generally of one type, and their number is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method provided by the embodiment of the present application can be applied to a scenario in which a user uses an electronic device to make a photo jigsaw.
For example, in a scenario in which a user makes a photo jigsaw on an electronic device, after taking multiple photos the user wants to post them to a social platform, and the photos are usually processed first. The jigsaw, as one mode of picture processing, allows a user to present multiple photos in a specific arrangement within a single image, which increases the aesthetics and integrity of the photos and makes them convenient to share. In the related art, when a user wants to combine the photos into one picture, the user may open an album in an image processing APP installed on the electronic device, select the photos to be combined, and select a preferred jigsaw template; the electronic device then adds the photos selected by the user to the jigsaw template and generates a new picture.
However, the user can only use the fixed templates provided by the APP. To solve this problem, in the technical solution provided by the embodiment of the present application, the user hand-draws a jigsaw template picture on the electronic device, the electronic device identifies the style information in the jigsaw template picture and generates a user-defined template according to the identified style information, and then adds a plurality of photos selected by the user to the user-defined template to generate a combined jigsaw. The user may drag a picture on the jigsaw template to change its display position. For an electronic device with a small screen, on which it is inconvenient for the user to hand-draw the jigsaw template picture, the technical solution provided by the embodiment of the present application also allows the user to hand-draw the jigsaw template picture on paper or a drawing board; the electronic device can then photograph the hand-drawn jigsaw template picture and identify the style information in it. In this way, when the user edits photos with a jigsaw application, the jigsaw template can be customized according to the user's preferences.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, an image processing method provided in an embodiment of the present application may include the following steps 201 to 203:
step 201, the image processing device identifies a target picture and generates a target template.
The target template includes N regions and a region number allocated to each region, where one region corresponds to one number and N is a positive integer.
For example, the target picture may be a pattern hand-drawn by the user in a drawing area provided by the electronic device, or a pattern hand-drawn by the user on paper or a drawing board and photographed by the electronic device through a camera.
Illustratively, the target picture includes N regions, each region contains a corresponding number, and the numbers of the N regions are not repeated. The electronic device identifies the target picture and, from the information in the target picture, obtains a template style used to generate the target template. The template style contains the information in the target picture.
For example, as shown in fig. 2, a template picture 20 is drawn by a user, where the picture 20 includes 5 regions (region 1, region 2, region 3, region 4, and region 5) indicating the placement positions of photos, and each region has a corresponding number (the numbers corresponding to regions 1 to 5 are 1, 2, 3, 4, and 5, respectively). The electronic device can recognize the information in the picture 20 to obtain a template style used to generate a user-defined jigsaw template.
For example, if the template picture hand-drawn by the user further includes a background picture, the electronic device may generate the target template directly from the hand-drawn template picture; if the hand-drawn template picture does not include a background picture, the electronic device may allow the user to select a picture from an album, a picture favorite, or the jigsaw application as the background picture of the target template.
Illustratively, after the electronic device obtains the style information of the template style by recognizing the target picture, the electronic device generates the target template according to the style information, so that the electronic device can add the plurality of photos selected by the user to the target template.
For example, the target template may be understood as a picture generated by the electronic device according to the style information and used for splicing with a plurality of photos selected by the user.
For example, after the electronic device generates the target template, the target template is stored in the electronic device, and the electronic device may also share the target template with other electronic devices.
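For illustration only, the following is a minimal sketch of one possible realization of the recognition in step 201. The patent does not prescribe any particular recognition technique; the use of OpenCV contour detection and Tesseract OCR, as well as the function name and the area threshold shown below, are assumptions made for this example.

```python
# Hypothetical sketch of step 201: find the hand-drawn regions in the target
# picture and read the number written inside each one.
import cv2
import pytesseract


def recognize_template(picture_path):
    """Return a dict {region_number: (x, y, w, h)} found in the hand-drawn picture."""
    gray = cv2.imread(picture_path, cv2.IMREAD_GRAYSCALE)
    # Binarize so that pen strokes become foreground regardless of lighting.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    regions = {}
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h < 2000:  # assumed threshold: skip pen noise and stray marks
            continue
        # Read the digit written inside the box to obtain the region number.
        patch = gray[y:y + h, x:x + w]
        text = pytesseract.image_to_string(
            patch, config="--psm 10 -c tessedit_char_whitelist=0123456789").strip()
        if text.isdigit():
            regions[int(text)] = (x, y, w, h)
    return regions
```

A rectangle is kept only when a digit can be read inside it, which mirrors the requirement that each of the N regions of the target template carries a region number.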
In step 202, the image processing apparatus adds M first images to M regions of the target template according to the image numbers of the M first images.
One first image is added to each region, and the region number of each region corresponds to the image number of the first image added to that region. The M regions are the M regions, among the N regions, whose region numbers correspond to the image numbers of the M first images; M is less than or equal to N.
Illustratively, the M first images may be M photos selected by the user from the album. Each of the M first images corresponds to a number. The electronic device adds a target image of the M images to a target area having the same number as the target image. The target area is one of the N areas.
It should be understood that the region number of each region corresponds to the number of the first image added to that region; for example, when the region numbers are 1, 2, and 3, the numbers of the correspondingly added first images may be 1, 2, and 3, or they may be one, two, and three.
For example, after the target image is added to the corresponding region, the image processing apparatus may adjust the size of the target image by scaling or cropping so that it fits the size of the target region. Alternatively, the image processing apparatus may simply place the target image at the position of the target region without adjusting its size, that is, arrange the plurality of first images according to the arrangement of their corresponding regions in the target template while keeping the original size of each first image.
In step 203, the image processing apparatus performs image stitching on all the images in the target template to generate a second image.
Illustratively, after the electronic device adds the M first images selected by the user to the corresponding positions of the target template, the electronic device performs image stitching between the M first images and the target template to generate a new picture (i.e., the second image). The user may then share the second image into a social application.
Thus, by recognizing the picture hand-drawn by the user, a target template including N numbered regions is generated. The user can then add M numbered first images to the M regions of the target template whose region numbers correspond, and finally all the images contained in the target template are stitched to generate a second, combined image, so that the user can customize the jigsaw template according to his or her own preferences when editing photos with a jigsaw application.
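Continuing the illustration, the sketch below shows how steps 202 and 203 could be realized with the Pillow library: each numbered photo is pasted into the region whose number matches, and the filled template is then flattened into the second image. The `regions` dictionary (as produced by the recognition sketch above), the scale-and-crop fitting policy, and the function name are assumptions; as noted above, the embodiment equally allows placing the images without resizing them.

```python
# Hypothetical sketch of steps 202 and 203: add numbered images to matching
# regions, then stitch everything into the second image.
from PIL import Image, ImageOps


def build_collage(background_path, regions, numbered_photos):
    """regions: {number: (x, y, w, h)}; numbered_photos: {number: file path}."""
    canvas = Image.open(background_path).convert("RGB")
    for number, photo_path in numbered_photos.items():
        if number not in regions:  # only regions whose numbers match are filled
            continue
        x, y, w, h = regions[number]
        photo = Image.open(photo_path).convert("RGB")
        # Scale and center-crop the photo so that it exactly fills its region.
        fitted = ImageOps.fit(photo, (w, h))
        canvas.paste(fitted, (x, y))
    return canvas  # the stitched "second image"


# Example (hypothetical file names):
# build_collage("background.jpg", regions, {1: "a.jpg", 2: "b.jpg"}).save("collage.jpg")
```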
Optionally, in this embodiment of the application, after the user adds the photo to the template, if the user wants to adjust the placement position of the photo in the template, the position of the photo in the template may be changed by dragging the photo displayed in the template.
After step 203, the image processing method provided in the embodiment of the present application may further include the following steps 203a1 and 203a2:
in step 203a1, the image processing apparatus receives a first input from the user to a target image in the M first images.
The target image is at least one of the M first images.
In step 203a2, the image processing apparatus adjusts the area of the target image displayed in the target template in response to the first input.
Illustratively, the first input may be a touch input by the user on a target image of the M first images, a voice instruction input by the user, or a specific gesture input by the user, which may be determined according to actual use requirements and is not limited in the embodiment of the present application. Illustratively, the touch input may be a drag input on the target image by the user.
Illustratively, the electronic device moves the target image from the current position to a new position after receiving a first input of the target image by the user.
The target image is limited to moving within the N regions, that is, the target image is limited to moving from one of the N regions to another of the N regions, and cannot move to a region other than the N regions.
In this way, when the user is not satisfied with the current placement position of the photo, the user can drag the position of the target image to adjust the position of the photo in the target template.
Further optionally, in the embodiment of the present application, after the user moves a photo to a new position, if another photo is already displayed at the new position, the electronic device may swap the positions of the two photos.
Illustratively, the target image is displayed in a first area of the target template, and the first input is used to adjust the area of the target image displayed in the target template to a second area.
Illustratively, the step 203a2 may include the following step 203a21:
Step 203a21: when a third image is displayed in the second area, the image processing apparatus exchanges the position of the target image in the first area with the position of the third image in the second area.
Illustratively, after receiving the first input by which the user drags the target image, the electronic device adjusts the position of the target image from the first area to the second area and adjusts the position of the third image previously displayed in the second area from the second area to the first area.
For example, with reference to fig. 2, fig. 3 shows an interface 30 of the electronic device after 5 images have been added to the target template, where the tree in fig. 3 is the background image of the target template, an image 2 is displayed in area 2, and an image 5 is displayed in area 5. When the user wants to move image 2 in the interface 30 to area 5, the electronic device moves image 2 into area 5 and moves image 5 into area 2.
In this way, the user can exchange the positions of two images in the template by dragging the images displayed in the template, and thereby adjust the placement layout of the photos.
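As an illustrative sketch of steps 203a1 through 203a21, the position exchange can be modeled as an update of a region-to-photo assignment followed by rebuilding the collage; the `assignments` structure and the function name below are hypothetical.

```python
# Hypothetical sketch: when a dragged image lands on an occupied region,
# exchange the two assignments; otherwise simply move it.
def move_photo(assignments, source_region, target_region):
    """assignments maps region number -> photo path; returns the updated mapping."""
    moving = assignments.pop(source_region)
    if target_region in assignments:
        # The target region already holds a photo (the "third image"): swap them.
        assignments[source_region] = assignments[target_region]
    assignments[target_region] = moving
    return assignments


# Example matching the fig. 3 scenario: images 2 and 5 trade places.
# move_photo({2: "img2.jpg", 5: "img5.jpg"}, source_region=2, target_region=5)
```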
Optionally, in the embodiment of the present application, since a jigsaw template includes both the position information of the photo placement positions and a background picture, while the customized template hand-drawn by the user generally contains only simple position information, the electronic device may allow the user to select a picture as the background picture of the jigsaw template.
Illustratively, the target picture includes position information indicating the positions of the N regions in the target template.
For example, before the image processing apparatus generates the target template, the image processing method provided in the embodiment of the present application may further include the following step 202a:
In step 202a, the image processing apparatus receives a second input by which the user selects a fourth image.
Illustratively, after the user selects the fourth image, the generating of the target template may include the following step 202b:
in step 202b, the image processing apparatus generates the target template by using the fourth image as the template background image of the target template in response to the second input.
The fourth image may be an image in the user's album or favorites, or an image provided by the jigsaw application of the electronic device.
Illustratively, the template frame may be understood as the information in the target template other than the background picture, for example, position information (i.e., the N regions described above) indicating the display positions of the first images, and information such as the display range of the background picture.
In this way, in the case where the background picture is not included in the target template, the user can select a picture that the user likes from the album as the background picture of the target template.
Optionally, in the embodiment of the present application, the target picture used for generating the target template may be a picture photographed by the user, or an image hand-drawn by the user.
Before step 201, the image processing method provided in the embodiment of the present application may further include the following steps 201a1 and 201a2:
step 201a1, the image processing apparatus receives a third input of the user on the screen.
In step 201a2, the image processing apparatus responds to the third input, draws a pattern according to the sliding trajectory of the third input, and generates the target picture according to the pattern.
Wherein, the area number corresponding to each of the N areas is obtained based on the pattern.
Illustratively, the image processing apparatus may generate a pattern according to a sliding trajectory of the user on the screen after receiving a third input of the user drawing an image on the screen, and then generate the target picture according to the generated pattern.
For example, the user draws 5 rectangular areas as shown in fig. 2 on the screen of the electronic device and draws a number in each area. The electronic device then generates a hand-drawn jigsaw template according to the 5 rectangular areas and numbers drawn by the user. Meanwhile, the user can add a background picture to the hand-drawn jigsaw template according to the methods in steps 202a and 202b above.
In this way, the user can generate the jigsaw template by hand drawing on the screen of the electronic device.
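For illustration, steps 201a1 and 201a2 could be realized by collecting the touch points of the third input into strokes and rasterizing them into the target picture; the sketch below assumes Pillow's ImageDraw, omits the platform-specific touch capture, and uses hypothetical names.

```python
# Hypothetical sketch of steps 201a1/201a2: turn the sliding trajectories of the
# third input into a picture that can be fed to the recognition in step 201.
from PIL import Image, ImageDraw


def strokes_to_picture(strokes, screen_size, out_path="target_picture.png"):
    """strokes: list of strokes, each a list of (x, y) points from the touch screen."""
    picture = Image.new("L", screen_size, color=255)  # white drawing canvas
    pen = ImageDraw.Draw(picture)
    for points in strokes:
        if len(points) > 1:
            pen.line(points, fill=0, width=4)  # draw the sliding trajectory
    picture.save(out_path)
    return out_path
```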
Optionally, in the embodiment of the present application, the user may hand-draw the template picture in a drawing interface provided by the electronic device; however, on an electronic device with a smaller screen it may be inconvenient for the user to hand-draw the template picture on the device. Therefore, the image processing method provided by the embodiment of the present application also allows the user to hand-draw the template picture on paper or a drawing board, after which the electronic device can acquire the template picture through a camera.
Before step 201, the image processing method provided in the embodiment of the present application may further include the following step 201b:
In step 201b, the image processing apparatus obtains a target picture obtained by photographing a pattern hand-drawn by the user with a camera.
Wherein, the area number corresponding to each of the N areas is obtained based on the pattern.
Illustratively, the user hand-drawn pattern is a pattern that the user hand-draws on a carrier or device other than the electronic device. The pattern serves the same purpose and has the same effect as a template picture drawn by the user on the electronic device.
In this way, when the screen of the electronic device is small and it is inconvenient for the user to hand-draw the template pattern on the screen, the user can hand-draw the pattern on paper and then photograph or scan it with the electronic device, so that the electronic device can generate the target template according to the template picture hand-drawn by the user.
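As an illustrative sketch of step 201b, a photographed paper sketch can be cleaned up before recognition so that uneven lighting and paper texture do not disturb the region detection; the adaptive-threshold preprocessing below is an assumption, and any comparable normalization of the photographed template picture would serve.

```python
# Hypothetical sketch of step 201b: normalize a photographed hand-drawn pattern
# before handing it to the recognition step.
import cv2


def preprocess_photographed_template(photo_path, out_path="target_picture.png"):
    gray = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress paper texture
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 15)  # block size 31, offset 15
    cv2.imwrite(out_path, binary)
    return out_path
```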
According to the image processing method provided by the embodiment of the present application, after the electronic device acquires the user's hand-drawn jigsaw template picture, it identifies the style information in the jigsaw template picture, generates the target template according to the identified style information, and then adds the M first images selected by the user to the target template to generate the stitched second image. If the user is not satisfied with the placement position of a first image, the electronic device may further adjust the display position of the target image after receiving a first input from the user. For an electronic device with a small screen, on which it is inconvenient for the user to hand-draw the jigsaw template picture, the electronic device can also photograph a jigsaw template picture hand-drawn by the user on paper or a drawing board to obtain the target picture. In this way, when the user edits photos with a jigsaw application, the jigsaw template can be customized according to the user's preferences.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiment of the present application is described with an example in which an image processing apparatus executes an image processing method.
In the embodiments of the present application, the above-described method is illustrated in the drawings. The image processing method is exemplarily described with reference to one of the drawings of the embodiments of the present application. In specific implementation, the image processing method shown in the above method drawing may also be implemented in combination with any other drawing that can be combined as illustrated in the above embodiments, and details are not repeated here.
Fig. 4 is a schematic diagram of a possible structure of an image processing apparatus for implementing the embodiment of the present application, and as shown in fig. 4, the image processing apparatus 600 includes: an identification module 601, a generation module 602, and an addition module 603, wherein:
The identification module is configured to identify a target picture. The generation module is configured to generate a target template, where the target template includes N regions and each region corresponds to a region number. The addition module is configured to add M first images to M regions of the target template generated by the generation module according to the image numbers of the M first images, one first image being added to each region, where the region number of each region corresponds to the image number of the first image added to that region; the M regions are the M regions, among the N regions, whose region numbers correspond to the image numbers of the M first images; M is less than or equal to N. The generation module is further configured to stitch all the images in the target template to generate a second image.
Optionally, the image processing apparatus 600 further includes a receiving module 604 and a display module 605. The receiving module 604 is configured to receive a first input from the user on a target image of the M first images; the display module 605 is configured to adjust, in response to the first input received by the receiving module 604, the region of the target image displayed in the target template.
Optionally, the receiving module 604 is configured to receive a second input that the user selects the fourth image; the generating module 602 is specifically configured to generate the target template by taking the fourth image as the template background image of the target template in response to the second input received by the receiving module 604.
Optionally, a receiving module 604, configured to receive a third input of the user on the screen; the generating module 602 is further configured to, in response to the third input received by the receiving module 604, draw a pattern according to the sliding trajectory of the third input, and generate a target picture according to the pattern; wherein, the area number corresponding to each area in the N areas is obtained based on the pattern.
Optionally, the image processing apparatus 600 further includes an acquisition module 606. The acquisition module 606 is configured to acquire a target picture obtained by photographing a pattern hand-drawn by the user with a camera, where the region number corresponding to each of the N regions is obtained based on the pattern.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. Illustratively, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 3, and is not described herein again to avoid repetition.
In the image processing apparatus provided by the embodiment of the present application, after the electronic device obtains the user's hand-drawn jigsaw template picture, it identifies the style information in the jigsaw template picture, generates a target template according to the identified style information, and then adds the M first images selected by the user to the target template to generate the stitched second image. If the user is not satisfied with the placement position of a first image, the electronic device may further adjust the display position of the target image after receiving a first input from the user. For an electronic device with a small screen, on which it is inconvenient for the user to hand-draw the jigsaw template picture, the electronic device can also photograph a jigsaw template picture hand-drawn by the user on paper or a drawing board to obtain the target picture. In this way, when the user edits photos with a jigsaw application, the jigsaw template can be customized according to the user's preferences.
Optionally, as shown in fig. 5, an electronic device M00 is further provided in this embodiment of the present application, and includes a processor M01, a memory M02, and a program or an instruction stored in the memory M02 and executable on the processor M01, where the program or the instruction when executed by the processor M01 implements each process of the foregoing embodiment of the image processing method, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The processor 110 is configured to identify a target picture and generate a target template. The display unit 106 is configured to add M first images to M regions of the target template according to the image numbers of the M first images, one first image being added to each region, where the region number of each region corresponds to the image number of the first image added to that region; the M regions are the M regions, among the N regions, whose region numbers correspond to the image numbers of the M first images; M is less than or equal to N. The processor 110 is further configured to stitch all the images in the target template to generate a second image.
Thus, by recognizing the picture hand-drawn by the user, a target template including N numbered regions is generated. The user can then add M numbered first images to the M regions of the target template whose region numbers correspond, and finally all the images contained in the target template are stitched to generate a second, combined image, so that the user can customize the jigsaw template according to his or her own preferences when editing photos with a jigsaw application.
Optionally, a user input unit 107 for receiving a first input of a user to a target image of the M first images; a display unit 106 for adjusting an area of the target image displayed in the target template in response to the first input received by the user input unit 107.
In this way, when the user is not satisfied with the current placement position of the photo, the user can drag the position of the target image to adjust the position of the photo in the target template.
Optionally, a user input unit 107 for receiving a second input of a user selecting the fourth image; the processor 110 is specifically configured to generate the target template by using the fourth image as a template background of the target template in response to the second input received by the user input unit 107.
In this way, in the case where the background picture is not included in the target template, the user can select a picture that the user likes from the album as the background picture of the target template.
Optionally, a user input unit 107 for receiving a third input of the user on the screen; the processor 110 is further configured to respond to a third input received by the user input unit 107, draw a pattern according to a sliding track of the third input, and generate a target picture according to the pattern; wherein, the area number corresponding to each area in the N areas is obtained based on the pattern.
In this way, the user can generate the jigsaw template by hand drawing on the screen of the electronic device.
Optionally, the input unit 104 is configured to obtain a target picture obtained by shooting a user's hand-drawn pattern by a camera; wherein, the area number corresponding to each area in the N areas is obtained based on the pattern.
In this way, when the screen of the electronic device is small and it is inconvenient for the user to hand-draw the template pattern on the screen, the user can hand-draw the pattern on paper and then photograph or scan it with the electronic device, so that the electronic device can generate the target template according to the template picture hand-drawn by the user.
According to the electronic device provided by the embodiment of the present application, after the electronic device acquires the user's hand-drawn jigsaw template picture, it identifies the style information in the jigsaw template picture, generates the target template according to the identified style information, and then adds the M first images selected by the user to the target template to generate the stitched second image. If the user is not satisfied with the placement position of a first image, the electronic device may further adjust the display position of the target image after receiving a first input from the user. For an electronic device with a small screen, on which it is inconvenient for the user to hand-draw the jigsaw template picture, the electronic device can also photograph a jigsaw template picture hand-drawn by the user on paper or a drawing board to obtain the target picture. In this way, when the user edits photos with a jigsaw application, the jigsaw template can be customized according to the user's preferences.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. An image processing method, characterized in that the method comprises:
identifying a target picture, and generating a target template, wherein the target template comprises N areas, and each area corresponds to an area number;
adding M first images to M regions of the target template according to image numbers of the M first images, one first image being added to each region, wherein the region number of each region corresponds to the image number of the first image added to that region; the M regions are: M regions, among the N regions, whose region numbers correspond to the image numbers of the M first images; M is less than or equal to N;
and carrying out image splicing on all the images in the target template to generate a second image.
2. The method according to claim 1, wherein after adding the M first images on the M regions of the target template in terms of their image numbers, the method further comprises:
receiving a first input of a user to a target image in the M first images;
in response to the first input, adjusting an area of the target image displayed in the target template.
3. The method of claim 1, wherein prior to the generating the target template, the method further comprises:
receiving a second input of a user selecting a fourth image;
the generating the target template includes:
and responding to the second input, using the fourth image as a template background image of the target template, and generating the target template.
4. The method of claim 1, wherein prior to identifying the target picture, the method comprises:
receiving a third input of the user on the screen;
responding to the third input, drawing a pattern according to the sliding track of the third input, and generating the target picture according to the pattern;
wherein the region number corresponding to each of the N regions is obtained based on the pattern.
5. The method of claim 1, wherein prior to identifying the target picture, the method comprises:
acquiring a target picture obtained by shooting a hand-drawn pattern of a user by a camera;
wherein the region number corresponding to each of the N regions is obtained based on the pattern.
6. An image processing apparatus, characterized in that the apparatus comprises: the device comprises an identification module, a generation module and an addition module;
the identification module is used for identifying a target picture;
the generating module is used for generating a target template; the target template comprises N areas, and each area corresponds to an area number;
the adding module is used for adding M first images to M regions of the target template generated by the generating module according to image numbers of the M first images, one first image being added to each region, wherein the region number of each region corresponds to the image number of the first image added to that region; the M regions are: M regions, among the N regions, whose region numbers correspond to the image numbers of the M first images; M is less than or equal to N;
the generating module is further configured to perform image stitching on all the images in the target template to generate a second image.
7. The apparatus of claim 6, further comprising: the device comprises a receiving module and a display module;
the receiving module is used for receiving a first input of a user to a target image in the M first images;
the display module is used for responding to the first input received by the receiving module and adjusting the area of the target image displayed in the target template.
8. The apparatus of claim 6, further comprising: a receiving module;
the receiving module is used for receiving a second input of selecting a fourth image by a user;
the generating module is specifically configured to generate the target template by using the fourth image as a template background image of the target template in response to the second input received by the receiving module.
9. The apparatus of claim 6, further comprising: a receiving module;
the receiving module is used for receiving a third input of the user on the screen;
the generating module is further configured to respond to a third input received by the receiving module, draw a pattern according to a sliding track of the third input, and generate the target picture according to the pattern;
wherein the region number corresponding to each of the N regions is obtained based on the pattern.
10. The apparatus of claim 6, further comprising: an acquisition module;
the acquisition module is used for acquiring a target picture obtained by shooting a hand-drawn pattern of a user by a camera;
wherein the region number corresponding to each of the N regions is obtained based on the pattern.
11. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, which when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 5.
CN202011224328.3A 2020-11-05 2020-11-05 Image processing method and device and electronic equipment Active CN112449110B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011224328.3A CN112449110B (en) 2020-11-05 2020-11-05 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011224328.3A CN112449110B (en) 2020-11-05 2020-11-05 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112449110A 2021-03-05
CN112449110B CN112449110B (en) 2022-03-11

Family

ID=74736863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011224328.3A Active CN112449110B (en) 2020-11-05 2020-11-05 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112449110B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102779008A (en) * 2012-06-26 2012-11-14 奇智软件(北京)有限公司 Screen screenshot method and system
CN103049894A (en) * 2012-09-14 2013-04-17 深圳市万兴软件有限公司 Image processing method and device
CN105278896A (en) * 2014-06-26 2016-01-27 腾讯科技(深圳)有限公司 Image display method and apparatus, and terminal equipment
WO2016107055A1 (en) * 2014-12-30 2016-07-07 中兴通讯股份有限公司 Processing method and device for image splicing
CN106598623A (en) * 2016-12-23 2017-04-26 维沃移动通信有限公司 Picture combination template generation method and mobile terminal
CN107872623A (en) * 2017-12-22 2018-04-03 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer-readable recording medium
CN108200332A (en) * 2017-12-27 2018-06-22 努比亚技术有限公司 A kind of pattern splicing method, mobile terminal and computer readable storage medium
CN109379542A (en) * 2018-10-23 2019-02-22 深圳豪客互联网有限公司 A kind of shooting picture joining method, device and computer readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113596329A (en) * 2021-07-23 2021-11-02 维沃移动通信(杭州)有限公司 Photographing method and photographing apparatus
WO2023030112A1 (en) * 2021-09-03 2023-03-09 北京字跳网络技术有限公司 Collage making method and apparatus, and electronic device and readable medium

Also Published As

Publication number Publication date
CN112449110B (en) 2022-03-11

Similar Documents

Publication Publication Date Title
CN111612873B (en) GIF picture generation method and device and electronic equipment
CN113093968B (en) Shooting interface display method and device, electronic equipment and medium
CN112714253B (en) Video recording method and device, electronic equipment and readable storage medium
CN112954210A (en) Photographing method and device, electronic equipment and medium
CN113014801B (en) Video recording method, video recording device, electronic equipment and medium
US20230345113A1 (en) Display control method and apparatus, electronic device, and medium
CN112449110B (en) Image processing method and device and electronic equipment
CN113179205B (en) Image sharing method and device and electronic equipment
CN112911147A (en) Display control method, display control device and electronic equipment
CN112672061A (en) Video shooting method and device, electronic equipment and medium
CN113194256B (en) Shooting method, shooting device, electronic equipment and storage medium
CN113596555B (en) Video playing method and device and electronic equipment
CN112399010B (en) Page display method and device and electronic equipment
CN112822394A (en) Display control method and device, electronic equipment and readable storage medium
CN113362426B (en) Image editing method and image editing device
CN112367487B (en) Video recording method and electronic equipment
CN112333389B (en) Image display control method and device and electronic equipment
CN113885748A (en) Object switching method and device, electronic equipment and readable storage medium
CN113805709A (en) Information input method and device
CN113271378A (en) Image processing method and device and electronic equipment
CN113326233A (en) Method and device for arranging folders
CN113691443B (en) Image sharing method and device and electronic equipment
CN112911060B (en) Display control method, first display control device and first electronic equipment
CN112764632B (en) Image sharing method and device and electronic equipment
CN112492206B (en) Image processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant