CN111461985A - Picture processing method and electronic equipment - Google Patents

Picture processing method and electronic equipment

Info

Publication number
CN111461985A
CN111461985A (application CN202010247507.2A)
Authority
CN
China
Prior art keywords
splicing
target
template
input
spliced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010247507.2A
Other languages
Chinese (zh)
Inventor
彭业
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010247507.2A priority Critical patent/CN111461985A/en
Publication of CN111461985A publication Critical patent/CN111461985A/en
Priority to PCT/CN2021/082750 priority patent/WO2021197165A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the invention provides a picture processing method and an electronic device. The method comprises the following steps: acquiring at least two pictures to be spliced; acquiring a target segmentation template; segmenting the pictures to be spliced into sub-pictures to be spliced according to the target segmentation template; acquiring a target splicing template; and splicing the sub-pictures to be spliced into a target picture based on the target splicing template. In the embodiment of the invention, once at least two pictures to be spliced are obtained, a target segmentation template is acquired and used to segment the pictures into sub-pictures to be spliced; when splicing is needed, the sub-pictures are combined according to the target splicing template to obtain the target picture. In this way, the operations of segmenting, splicing and combining different pictures are simplified, operation complexity is reduced, and convenience is improved.

Description

Picture processing method and electronic equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a picture processing method and an electronic device.
Background
In daily life, sharing pictures or communicating through pictures has become a common social behavior, and splicing different pictures together is a frequent requirement.
In current mainstream picture processing applications, splicing different pictures requires the user to first browse the album and divide each picture separately using the cutting and dividing function provided by the application; then, taking some of the picture elements obtained by splitting as a carrier, re-enter the album and select the other split picture elements; and finally perform a splicing operation to obtain the final composite picture.
Therefore, in the existing picture splicing process, multiple segmentation and selection operations must be performed on different pictures, so the steps are numerous, the operation is complex, and convenience is poor.
Disclosure of Invention
An embodiment of the invention provides a picture processing method and an electronic device, which can solve the problems of numerous steps, complex operation and poor convenience in the existing picture processing process.
In order to solve the technical problem, the invention is realized as follows:
In a first aspect, an embodiment of the present invention provides a picture processing method applied to an electronic device, where the method includes:
acquiring at least two pictures to be spliced;
acquiring a target segmentation template;
according to the target segmentation template, segmenting the picture to be spliced into sub-pictures to be spliced;
acquiring a target splicing template;
and splicing the sub-pictures to be spliced into the target picture based on the target splicing template.
In a second aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
the first acquisition module is used for acquiring at least two pictures to be spliced;
the second acquisition module is used for acquiring a target segmentation template;
the segmentation module is used for segmenting the picture to be spliced into sub-pictures to be spliced according to the target segmentation template;
the third acquisition module is used for acquiring a target splicing template;
and the splicing module is used for splicing the sub-pictures to be spliced into the target picture based on the target splicing template.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the picture processing method according to the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the picture processing method according to the first aspect.
In the embodiment of the invention, once at least two pictures to be spliced are obtained, a target segmentation template is acquired and used to segment the pictures into sub-pictures to be spliced; when splicing is needed, the sub-pictures are combined according to the target splicing template to obtain the target picture. In this way, the operations of segmenting, splicing and combining different pictures are simplified, operation complexity is reduced, and convenience is improved.
Drawings
Fig. 1 is a flowchart of a picture processing method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of an interface for acquiring pictures according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of picture segmentation according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of picture stitching according to an embodiment of the present invention;
Fig. 5 is a flowchart of another picture processing method according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a presentation control according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of a preset segmentation template or a preset stitching template according to an embodiment of the present invention;
Fig. 8 is a block diagram of an electronic device according to an embodiment of the present invention;
Fig. 9 is a block diagram of another electronic device according to an embodiment of the present invention;
Fig. 10 is a block diagram of another electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present invention, it should be understood that the sequence numbers of the following processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Example one
Referring to fig. 1, a flowchart of a picture processing method according to an embodiment of the present invention is shown, where the method includes the following steps:
step 101, obtaining at least two pictures to be spliced.
In the embodiment of the invention, the method can be used to segment and splice at least two different pictures. When the method is built into a native application program of the electronic device, such as the album or gallery, the user can select at least two pictures to be spliced from the picture display interface after opening the local album or gallery. When the method is built into a third-party application program developed by a party other than the electronic device manufacturer, such as a picture processing application, the user can obtain the pictures to be spliced from a local album, a cloud storage space, a web link or the like through the picture loading path provided in the third-party application program.
It should be noted that, when the pictures to be stitched are obtained, the selected pictures can be distinguished from other unselected pictures through highlight marks, symbol marks and the like. As shown in fig. 2, a schematic diagram shows a state in which two pictures P1 and P2 are selected, with a "√" mark in the upper left corner of each picture. Two or more pictures to be spliced can be selected simultaneously when they are acquired, so step-by-step selection is not needed. The picture elements of the pictures to be stitched may be landscapes, group portraits and the like, which is not limited in the embodiment of the present invention.
And 102, acquiring a target segmentation template.
In the embodiment of the invention, various segmentation templates can be preset and stored for a user to perform picture segmentation, and the user can select a target segmentation template from the segmentation templates to apply to the picture to be spliced.
And 103, dividing the picture to be spliced into sub-pictures to be spliced according to the target division template.
After the target segmentation template is determined, the picture to be spliced can be segmented into a plurality of sub-pictures to be spliced according to the number and shapes of the segmentation regions provided in the target segmentation template. For example, as shown in fig. 3, when the target segmentation template is a nine-grid template, the picture P1 can be segmented into P1-1, P1-2, P1-3, … P1-7, P1-8 and P1-9; similarly, the picture P2 can be segmented into P2-1, P2-2, P2-3, … P2-7, P2-8 and P2-9.
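The nine-grid segmentation described above can be sketched in Python; here a picture is modelled as a 2-D list of pixels, and the function name, grid dimensions and toy data are illustrative assumptions rather than the patent's actual implementation:

```python
def split_into_grid(image, rows=3, cols=3):
    """Split a picture (2-D list of pixels) into rows*cols sub-pictures.

    Sub-pictures are returned in row-major order, mirroring the
    P1-1 ... P1-9 numbering used in the description.
    """
    h, w = len(image), len(image[0])
    cell_h, cell_w = h // rows, w // cols
    subs = []
    for r in range(rows):
        for c in range(cols):
            # Crop one cell: the relevant rows, then the relevant columns.
            subs.append([row[c * cell_w:(c + 1) * cell_w]
                         for row in image[r * cell_h:(r + 1) * cell_h]])
    return subs

# A toy 6x6 "picture" whose pixel values encode their position.
p1 = [[(y, x) for x in range(6)] for y in range(6)]
parts = split_into_grid(p1)  # nine 2x2 sub-pictures P1-1 ... P1-9
```

Applying the same function to P2 yields its nine sub-pictures, so both pictures are segmented with a single template selection.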
And 104, acquiring a target splicing template.
After the segmentation is completed, the user needs to combine and splice elements from different pictures, so a target splicing template needs to be obtained in order to combine the elements of the different pictures to be spliced according to a designed rule. Similarly, various splicing templates can be preset and stored for users to splice pictures, and the user can select a target splicing template from among them to apply to the splicing process.
And 105, splicing the sub-pictures to be spliced into the target picture based on the target splicing template.
The sub-pictures to be spliced obtained by segmentation are filled into the splicing regions according to the number, shape and the like of the regions of the target splicing template, and are thereby combined into a new target picture. As shown in fig. 4, based on the nine-grid splicing template, a new nine-grid photo wall can be obtained by combining the sub-pictures to be spliced of picture P1 and picture P2.
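The filling step can likewise be sketched; this toy version assumes equally sized rectangular regions filled in row-major order, which matches the nine-grid case but not irregular templates:

```python
def stitch_grid(sub_pictures, rows=3, cols=3):
    """Paste rows*cols equally sized sub-pictures into one target picture."""
    cell_h, cell_w = len(sub_pictures[0]), len(sub_pictures[0][0])
    target = [[None] * (cols * cell_w) for _ in range(rows * cell_h)]
    for idx, sub in enumerate(sub_pictures):
        r, c = divmod(idx, cols)  # splicing region for this sub-picture
        for y in range(cell_h):
            for x in range(cell_w):
                target[r * cell_h + y][c * cell_w + x] = sub[y][x]
    return target

# Nine 1x1 sub-pictures labelled 0..8 stand in for a mix of P1/P2 pieces.
subs = [[[i]] for i in range(9)]
wall = stitch_grid(subs)  # a 3x3 "photo wall"
```

In a real implementation the sub-pictures could come from different source pictures, which is exactly how the mixed P1/P2 photo wall of fig. 4 is produced.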
In the embodiment of the invention, once at least two pictures to be spliced are obtained, a target segmentation template is acquired and used to segment the pictures into sub-pictures to be spliced; when splicing is needed, the sub-pictures are combined according to the target splicing template to obtain the target picture. In this way, the operations of segmenting, splicing and combining different pictures are simplified, operation complexity is reduced, and convenience is improved.
Example two
Referring to fig. 5, a flowchart of another picture processing method provided in the embodiment of the present invention is shown, where the steps of the method are as follows:
step 201, at least two pictures to be spliced are obtained.
In the embodiment of the invention, the method can be used to segment and splice at least two different pictures. When the method is built into a native application program of the electronic device, such as the album or gallery, the user can select at least two pictures to be spliced from the picture display interface after opening the local album or gallery. When the method is built into a third-party application program developed by a party other than the electronic device manufacturer, such as a picture processing application, the user can obtain the pictures to be spliced from a local album, a cloud storage space, a web link or the like through the picture loading path provided in the third-party application program.
It should be noted that, when the pictures to be stitched are obtained, the selected pictures can be distinguished from other unselected pictures through highlight marks, symbol marks and the like. As shown in fig. 2, a schematic diagram shows a state in which two pictures P1 and P2 are selected, with a "√" mark in the upper left corner of each picture. Two or more pictures to be spliced can be selected simultaneously when they are acquired, so step-by-step selection is not needed. The picture elements of the pictures to be stitched may be landscapes, group portraits and the like, which is not limited in the embodiment of the present invention.
In step 202, a first input from a user is received.
The electronic device may receive gesture information from the user via its display screen, such as long-press information, slide information, multi-finger touch information and pressing-force information; it may track the user's gaze through the camera and light sensor and determine the first input according to the gaze position, gaze duration, blink frequency and the like; it may identify the fingerprint information of different fingers through the fingerprint module, or determine the first input according to different fingerprint partitions on one finger; and it may also collect voice information through a microphone as the first input. Taking gesture information as an example, the pressing duration and sliding distance of the user may be monitored; for example, a pressing duration exceeding 3 seconds together with a sliding distance greater than 1 centimeter may serve as the first input. Taking eye-control information as an example, the gazing duration at a preset position may be monitored; for example, gazing at a preset position for more than 3 seconds may serve as the first input. Similar schemes can be adopted for fingerprint information and voice information and are not described in detail.
Therefore, the first input comprises at least one of gesture information, eye-control information, fingerprint information and voice information, which provides the user with flexible, rich interaction choices and a better interaction experience.
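As a minimal sketch of the gesture branch only, assuming the example thresholds above (the function name and the AND-combination of the two conditions are illustrative assumptions, not claimed by the patent):

```python
def is_first_input(press_seconds, slide_cm):
    """Decide whether a gesture qualifies as the 'first input'.

    Thresholds follow the example in the description: a press longer
    than 3 seconds combined with a slide of more than 1 centimetre.
    """
    return press_seconds > 3.0 and slide_cm > 1.0
```

Eye-control, fingerprint and voice branches would be analogous predicates over their respective sensor readings.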
Step 203, responding to the first input, and displaying at least two preset segmentation templates.
After receiving the first input of the user, a presentation control for displaying the preset segmentation templates can be invoked based on the first input. The presentation control can float above the pictures to be spliced and is used for displaying at least two preset segmentation templates. The presentation control may provide a textual description or a window that previews the effect of each template. As shown in fig. 6, taking the first input as gesture information as an example, when the user presses the picture to be stitched for 5 seconds and slides upwards for 1 centimeter, a presentation control may be displayed on the display interface; the presentation control may be located at any position on the display interface, provided that operation is not affected. An example of a presentation control is given in fig. 6, which presents different preset segmentation templates, namely two-grid, three-grid, four-grid, six-grid and nine-grid, to the user in the form of a selection menu with text titles. Multiple different preset segmentation templates can more easily meet users' different segmentation requirements and reduce the steps of performing segmentation operations with other applications. Of course, in practice, preset segmentation templates with richer patterns, as shown in fig. 7, can also be designed; the present invention does not limit the patterns or the number of the preset segmentation templates.
And step 204, receiving a second input of the user.
For the at least two preset segmentation templates provided, a second input of the user needs to be received, where the second input is used to select one of the preset segmentation templates for use, so that it can be determined from the second input which preset segmentation template the user selected. The type of the second input may refer to the first input described in step 202 and is not described again here.
Step 205, in response to the second input, determining a target segmentation template from the preset segmentation templates.
Faced with a plurality of different preset segmentation templates, the second input is selection information for one of the templates. For example, as shown in fig. 6, different preset segmentation templates, namely two-grid, three-grid, four-grid, six-grid and nine-grid, are displayed; when the second input is a click operation of the user on the nine-grid option, the nine-grid template is used as the target segmentation template for segmenting the pictures to be stitched. After the user selects the nine-grid template as the target segmentation template, the current display interface may be exited and a segmentation preview screen displayed. It will be appreciated that the target segmentation template may be any one of the preset segmentation templates, depending on the selection information represented by the second input.
And 206, dividing the picture to be spliced into sub-pictures to be spliced according to the target segmentation template.
After the target segmentation template is determined, the picture to be spliced can be segmented into a plurality of sub-pictures to be spliced according to the number and shapes of the segmentation regions provided in the target segmentation template. For example, as shown in fig. 3, when the target segmentation template is a nine-grid template, the picture P1 can be segmented into P1-1, P1-2, P1-3, … P1-7, P1-8 and P1-9; similarly, the picture P2 can be segmented into P2-1, P2-2, P2-3, … P2-7, P2-8 and P2-9.
It should be noted that, although the target segmentation template is provided in the embodiment of the present invention, when the target segmentation template is used to segment the pictures to be stitched, control information from the user can still be received so as to segment the pictures more accurately. For example, for a nine-grid template selected by the user, the user can slide a finger to adjust the dividing lines between the grids, thereby adjusting the sizes of the divided regions to match the picture elements as well as possible and avoid splitting picture elements improperly (for example, dividing a person's head into two halves). One or more regions can also be locked and excluded from division according to the user's control information. In this way, different segmentation requirements of users are met and accurate segmentation is achieved.
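The adjustable dividing lines can be sketched by passing the user-dragged boundary offsets explicitly; the function name and the pixel-offset representation of the lines are illustrative assumptions:

```python
def split_with_custom_lines(image, row_lines, col_lines):
    """Split a picture (2-D list of pixels) along user-adjusted lines.

    row_lines / col_lines hold the interior boundary offsets after the
    user drags them, e.g. row_lines=[1, 4] gives three uneven row bands
    for a 6-pixel-high picture, so a face need not be cut in half.
    """
    h, w = len(image), len(image[0])
    rows = [0] + list(row_lines) + [h]
    cols = [0] + list(col_lines) + [w]
    subs = []
    for r0, r1 in zip(rows, rows[1:]):
        for c0, c1 in zip(cols, cols[1:]):
            subs.append([row[c0:c1] for row in image[r0:r1]])
    return subs

p = [[(y, x) for x in range(6)] for y in range(6)]
pieces = split_with_custom_lines(p, [1, 4], [3])  # uneven 3x2 grid
```

Locking a region would amount to excluding its band boundaries from further adjustment, which this sketch leaves out.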
Step 207, a third input from the user is received.
After the segmentation of the pictures to be stitched is completed, the segmentation process may be temporarily suspended while the electronic device continues to monitor the user's operation behavior, so as to receive a third input of the user, where the third input is used to invoke a presentation control similar to that in step 203 in order to display the preset stitching templates.
And step 208, responding to the third input, and displaying at least two preset splicing templates.
The method for displaying at least two preset splicing templates on the display interface is similar to the method for displaying the preset segmentation templates in step 203, and is not described herein again.
It should be noted that, in the embodiment of the present invention, in the absence of a third input, a default one of the preset stitching templates may be displayed. That is, after the segmentation of the pictures to be stitched is completed, a default preset stitching template can be displayed directly without requiring the user's selection. In addition, the target segmentation template and the target stitching template may also be determined simultaneously based on the second input and used for picture segmentation and picture stitching respectively. It is understood that the preset stitching templates displayed in step 208 and the preset segmentation templates displayed in step 203 may be the same or different, which is not limited in the embodiment of the present invention. Regarding the style of the target stitching template, reference may also be made to the schematic of fig. 7.
Step 209, a fourth input from the user is received.
For the at least two preset stitching templates provided, a fourth input of the user needs to be received, where the fourth input is used to select one of the preset stitching templates for use, so that it can be determined from the fourth input which preset stitching template the user selected. Like the first and second inputs, the third and fourth inputs may include at least one of the following: gesture information, eye-control information, fingerprint information and voice information. The type of the fourth input may refer to the first input described in step 202 and is not described again here.
And 210, responding to the fourth input, and determining a target splicing template from the preset splicing templates.
Faced with a plurality of different preset stitching templates, the fourth input is selection information for one of the stitching templates. Regarding the style of the target stitching template, similarly to fig. 6, there may also be different preset stitching templates such as two-grid, three-grid, four-grid, six-grid and nine-grid; when the fourth input is a click operation of the user on the nine-grid option, the nine-grid template is used as the target stitching template for stitching the sub-pictures to be spliced. After the user selects the nine-grid template as the target stitching template, the aforementioned sub-pictures to be spliced may be acquired for stitching. It is to be understood that the target stitching template may be any one of the preset stitching templates, depending on the selection information represented by the fourth input.
And step 211, determining the number of the splicing areas in the target splicing template.
Each preset stitching template has a unique shape composition and number of stitching regions. Taking the first template shown in fig. 6 as an example, it includes three stitching regions: a central heart-shaped region and irregular regions on both sides. Taking the nine-grid template as an example, it includes nine stitching regions, each rectangular in shape. Similar to picture attribute information, a field such as "stitching regions" may be defined in the attributes of a preset stitching template, and a corresponding attribute value written into the field, where the attribute value indicates the number of stitching regions in the template. Since the target stitching template is one of the preset stitching templates, after the target stitching template is obtained, the number of its stitching regions can be learned by reading its attribute information.
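The attribute field described above might be modelled as follows; the record layout, field names and shape strings are hypothetical illustrations, not the patent's data format:

```python
from dataclasses import dataclass, field

@dataclass
class StitchTemplate:
    """Hypothetical attribute record for a preset stitching template.

    'region_count' mirrors the attribute value described above;
    'region_shapes' carries illustrative shape labels per region.
    """
    name: str
    region_count: int
    region_shapes: list = field(default_factory=list)

# Two example templates matching the description.
nine_grid = StitchTemplate("nine-grid", 9, ["rectangle"] * 9)
heart = StitchTemplate("heart", 3, ["irregular", "heart", "irregular"])
```

Reading `region_count` is then all the stitching step needs to know how many sub-pictures to request.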
And 212, acquiring the sub-pictures to be spliced, which have the same number as the splicing areas.
After the number of stitching regions in the target stitching template is known, the same number of sub-pictures to be spliced needs to be obtained to ensure that no region of the target stitching template is left blank. Again taking the nine-grid template as the target stitching template, nine sub-pictures to be spliced need to be selected from the plurality of sub-pictures obtained by the segmentation in the foregoing steps. For example, P1-1, P1-2, P1-3, P1-4 and P1-5 are selected from the sub-pictures of picture P1, and P2-1, P2-2, P2-3 and P2-4 from the sub-pictures of picture P2. Specifically, the sub-pictures to be spliced may be selected manually by the user according to his or her own needs through gesture operation, eye-control operation, voice operation and the like, so that the stitching requirement is met precisely. Of course, image recognition and image classification algorithms from artificial intelligence can also be used to recommend sub-pictures automatically, for example portraits, scenery or animals, which improves the degree of intelligence. Therefore, the embodiment of the present invention does not limit the way in which the sub-pictures to be spliced are acquired.
Step 213, filling the sub-picture to be spliced in the splicing area to generate a target picture; and one splicing area corresponds to one sub-picture to be spliced.
Picture stitching starts once the target stitching template and the sub-pictures to be spliced are ready. Following the example of step 212 and referring to fig. 4, the sub-pictures P1-1, P1-2, P1-3, P1-4, P1-5, P2-1, P2-2, P2-3 and P2-4 are filled into the respective regions of the nine-grid stitching template to obtain the target picture, with each region filled by exactly one sub-picture, and the target picture has the effect of the nine-grid photo wall shown in fig. 4. After the target picture is obtained, it can be stored, sent, or shared to application programs such as WeChat, QQ, mailbox and microblog according to the user's gesture operation, eye-control operation and the like, to complete social display and communication. It should be understood that the nine-grid template is used here only as an example and is not a limitation on the implementation of the present invention; a person skilled in the art may also perform segmentation and stitching with reference to templates of the other patterns given above to obtain a unique target picture.
In addition, in the embodiment of the present invention, because each sub-picture to be spliced obtained after segmentation is a local picture element of the original picture, stylized filters such as desaturation, black-and-white, nostalgia, old-photo and beautification can be applied to an individual sub-picture before stitching, realizing local processing of the target picture and achieving local image correction without affecting the whole picture.
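Local stylization of a single sub-picture can be sketched with a black-and-white filter; the pixel representation and the ITU-R BT.601 luminance weights are illustrative choices, not taken from the patent:

```python
def to_grayscale(sub_picture):
    """Apply a black-and-white filter to one sub-picture only.

    Pixels are (r, g, b) tuples; luminance uses the common
    ITU-R BT.601 weights. Only the chosen sub-picture is touched,
    leaving the rest of the target picture unchanged.
    """
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in sub_picture]

sub = [[(255, 0, 0), (0, 255, 0)]]  # one red and one green pixel
gray = to_grayscale(sub)
```

The stylized sub-picture is then passed to the stitching step in place of the original, which is what makes the correction local.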
In the embodiment of the invention, once at least two pictures to be spliced are obtained, a target segmentation template is acquired and used to segment the pictures into sub-pictures to be spliced; when splicing is needed, the sub-pictures are combined according to the target splicing template to obtain the target picture. In this way, the operations of segmenting, splicing and combining different pictures are simplified, operation complexity is reduced, and convenience is improved. Moreover, various types of input information from the user can be received to complete the selection of templates and even of pictures, so the user's stitching requirements can be met precisely, the fun of operation is enhanced, and the human-computer interaction experience is improved.
EXAMPLE III
Referring to fig. 8, there is shown a block diagram of an electronic device comprising:
the first obtaining module 301 is configured to obtain at least two pictures to be stitched.
A second obtaining module 302, configured to obtain a target segmentation template.
And the dividing module 303 is configured to divide the to-be-spliced picture into to-be-spliced sub-pictures according to the target division template.
And a third obtaining module 304, configured to obtain the target stitching template.
And a splicing module 305, configured to splice the sub-pictures to be spliced into the target picture based on the target splicing template.
As the electronic device embodiment is substantially similar to the method embodiment, its description is brief; for relevant details and advantages, reference may be made to the corresponding parts of the description of the method embodiment.
Example four
Referring to fig. 9, there is shown a block diagram of another electronic device comprising:
the first obtaining module 401 is configured to obtain at least two pictures to be stitched.
A second obtaining module 402, configured to obtain the target segmentation template.
Optionally, the second obtaining module 402 may include:
the first receiving sub-module 4021 is configured to receive a first input from a user;
a first response sub-module 4022, configured to display at least two preset segmentation templates in response to the first input;
a second receiving sub-module 4023, configured to receive a second input from the user;
a second response sub-module 4024, configured to determine a target segmentation template from the preset segmentation templates in response to the second input.
Optionally, the first input or the second input includes at least one of the following input information: gesture information, eye control information, fingerprint information, and voice information.
And a dividing module 403, configured to divide the picture to be stitched into sub-pictures to be stitched according to the target division template.
And a third obtaining module 404, configured to obtain the target stitching template.
Optionally, the third obtaining module 404 may include:
a third receiving sub-module 4041, configured to receive a third input from the user;
a third response sub-module 4042, configured to respond to the third input and display at least two preset stitching templates;
a fourth receiving sub-module 4043, configured to receive a fourth input from the user;
a fourth response sub-module 4044, configured to determine a target stitching template from the preset stitching templates in response to the fourth input.
Similarly to the first input and the second input, the third input and the fourth input may include at least one of the following: gesture information, eye control information, fingerprint information, and voice information.
And the splicing module 405 is configured to splice the sub-pictures to be spliced into the target picture based on the target splicing template.
Optionally, the splicing module 405 may include:
the determining submodule 4051 is configured to determine the number of the splicing areas in the target splicing template;
an obtaining sub-module 4052, configured to obtain the sub-pictures to be stitched, which are the same as the number of the stitching regions;
a generating sub-module 4053, configured to fill the sub-picture to be stitched in the stitching region, so as to generate a target picture; and one splicing area corresponds to one sub-picture to be spliced.
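The one-region-one-sub-picture pairing performed by these sub-modules can be sketched as a count check followed by an ordered pairing; the (x, y, width, height) region representation is an illustrative assumption:

```python
# Sketch of the splicing module: determine the number of splicing regions
# in the stitching template, take the same number of sub-pictures, and
# pair each region with exactly one sub-picture. Region geometry is
# modeled only by an (x, y, width, height) tuple, an assumption here.

def assign_sub_pictures(template_regions, sub_pictures):
    """Pair regions and sub-pictures one-to-one, in order."""
    count = len(template_regions)      # number of splicing regions
    if len(sub_pictures) < count:
        raise ValueError("not enough sub-pictures for this template")
    chosen = sub_pictures[:count]      # same number as regions
    return list(zip(template_regions, chosen))

regions = [(0, 0, 2, 2), (2, 0, 2, 2)]             # a two-region template
pairs = assign_sub_pictures(regions, ["sub-A", "sub-B", "sub-C"])
print(pairs[0])  # ((0, 0, 2, 2), 'sub-A')
```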
As the electronic device embodiment is substantially similar to the method embodiment, its description is brief; for relevant details and advantages, reference may be made to the corresponding parts of the description of the method embodiment.
An embodiment of the present invention further provides an electronic device, including:
a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the picture processing method provided by the preceding embodiments. The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 7, and details are not repeated here to avoid repetition.
FIG. 10 is a diagram illustrating a hardware configuration of an electronic device implementing various embodiments of the invention;
the electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 8 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 510 is configured to obtain at least two pictures to be stitched;
acquiring a target segmentation template;
according to the target segmentation template, segmenting the picture to be spliced into sub-pictures to be spliced;
acquiring a target splicing template;
and splicing the sub-pictures to be spliced into the target picture based on the target splicing template.
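The segment-then-stitch flow executed by the processor can be sketched as follows; the grid-based segmentation template is an illustrative assumption (the embodiments also allow templates of other patterns):

```python
# Sketch of the processor's flow: segment each source picture into
# sub-pictures according to a grid segmentation template, then collect
# the sub-pictures for stitching. Pictures are 2D lists of pixels.

def segment(picture, rows, cols):
    """Split a picture into rows x cols sub-pictures of equal size."""
    height, width = len(picture), len(picture[0])
    sub_h, sub_w = height // rows, width // cols
    subs = []
    for gr in range(rows):
        for gc in range(cols):
            subs.append([row[gc * sub_w:(gc + 1) * sub_w]
                         for row in picture[gr * sub_h:(gr + 1) * sub_h]])
    return subs

# Two 2x4 source pictures, segmented with 1x2 and 2x1 grid templates
# respectively, yield 2 + 2 = 4 sub-pictures to be stitched.
p1 = [["a"] * 4 for _ in range(2)]
p2 = [["b"] * 4 for _ in range(2)]
sub_pictures = segment(p1, 1, 2) + segment(p2, 2, 1)
print(len(sub_pictures))  # 4 sub-pictures, ready for a 2x2 stitching template
```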
The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 7, and is not described herein again to avoid repetition.
In the embodiment of the invention, when at least two pictures to be spliced are obtained, the target segmentation template is obtained, the pictures can be segmented by the target segmentation template to obtain the sub-pictures to be spliced, and when splicing is needed, the sub-pictures to be spliced are spliced according to the target splicing template to obtain the target pictures. Therefore, the operation steps of segmenting, splicing and combining different pictures can be simplified, the operation complexity is reduced, and the operation convenience is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during a message sending/receiving process or a call; specifically, it receives downlink data from a base station and delivers it to the processor 510 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 502, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the electronic apparatus 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042; the graphics processor 5041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 501 and output.
The electronic device 500 also includes at least one sensor 505, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 5061 and/or a backlight when the electronic device 500 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. Touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 5071 using a finger, stylus, or any suitable object or attachment). The touch panel 5071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061; when the touch panel 5071 detects a touch operation on or near it, the operation is transmitted to the processor 510 to determine the type of the touch event, and the processor 510 then provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 10 the touch panel 5071 and the display panel 5061 are shown as two independent components implementing the input and output functions of the electronic device, in some embodiments the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the electronic device; this is not limited herein.
The interface unit 508 is an interface for connecting an external device to the electronic apparatus 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic apparatus 500 or may be used to transmit data between the electronic apparatus 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 510 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the electronic device. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The electronic device 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system.
In addition, the electronic device 500 includes some functional modules that are not shown, and are not described in detail herein.
The embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements each process of the above picture processing method embodiments and can achieve the same technical effect, which is not repeated here to avoid repetition. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A picture processing method is applied to electronic equipment, and is characterized by comprising the following steps:
acquiring at least two pictures to be spliced;
acquiring a target segmentation template;
according to the target segmentation template, segmenting the picture to be spliced into sub-pictures to be spliced;
acquiring a target splicing template;
and splicing the sub-pictures to be spliced into the target picture based on the target splicing template.
2. The method of claim 1, wherein the obtaining a target segmentation template comprises:
receiving a first input of a user;
displaying at least two preset segmentation templates in response to the first input;
receiving a second input of the user;
in response to the second input, a target segmentation template is determined from the preset segmentation templates.
3. The method according to claim 1, wherein the splicing the sub-picture to be spliced into the target picture based on the target splicing template comprises:
determining the number of splicing areas in the target splicing template;
acquiring the sub-pictures to be spliced, the number of which is the same as that of the splicing areas;
filling the sub-picture to be spliced in the splicing area to generate a target picture;
and one splicing area corresponds to one sub-picture to be spliced.
4. The method of claim 1, wherein the obtaining the target stitching template comprises:
receiving a third input of the user;
displaying at least two preset splicing templates in response to the third input;
receiving a fourth input from the user;
in response to the fourth input, determining a target stitching template from the preset stitching templates.
5. The method of claim 2, wherein the first input or the second input comprises at least one of the following input information:
gesture information, eye control information, fingerprint information, and voice information.
6. An electronic device, characterized in that the electronic device comprises:
the first acquisition module is used for acquiring at least two pictures to be spliced;
the second acquisition module is used for acquiring a target segmentation template;
the segmentation module is used for segmenting the picture to be spliced into sub-pictures to be spliced according to the target segmentation template;
the third acquisition module is used for acquiring a target splicing template;
and the splicing module is used for splicing the sub-pictures to be spliced into the target picture based on the target splicing template.
7. The electronic device of claim 6, wherein the second obtaining module comprises:
the first receiving submodule is used for receiving a first input of a user;
the first response submodule is used for responding to the first input and displaying at least two preset segmentation templates;
the second receiving submodule is used for receiving a second input of the user;
and the second response submodule is used for responding to the second input and determining a target segmentation template from the preset segmentation templates.
8. The electronic device of claim 6, wherein the stitching module comprises:
the determining submodule is used for determining the number of the splicing areas in the target splicing template;
the obtaining sub-module is used for obtaining the sub-pictures to be spliced, the number of which is the same as that of the splicing areas;
the generation sub-module is used for filling the sub-picture to be spliced in the splicing area to generate a target picture;
and one splicing area corresponds to one sub-picture to be spliced.
9. The electronic device of claim 6, wherein the third obtaining module comprises:
the third receiving submodule is used for receiving a third input of the user;
the third response submodule is used for responding to the third input and displaying at least two preset splicing templates;
the fourth receiving submodule is used for receiving a fourth input of the user;
and the fourth response submodule is used for responding to the fourth input and determining a target splicing template from the preset splicing templates.
10. The electronic device of claim 7, wherein the first input or the second input comprises at least one of the following input information:
gesture information, eye control information, fingerprint information, and voice information.
CN202010247507.2A 2020-03-31 2020-03-31 Picture processing method and electronic equipment Pending CN111461985A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010247507.2A CN111461985A (en) 2020-03-31 2020-03-31 Picture processing method and electronic equipment
PCT/CN2021/082750 WO2021197165A1 (en) 2020-03-31 2021-03-24 Picture processing method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010247507.2A CN111461985A (en) 2020-03-31 2020-03-31 Picture processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN111461985A true CN111461985A (en) 2020-07-28

Family

ID=71685782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010247507.2A Pending CN111461985A (en) 2020-03-31 2020-03-31 Picture processing method and electronic equipment

Country Status (2)

Country Link
CN (1) CN111461985A (en)
WO (1) WO2021197165A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112162805A (en) * 2020-09-23 2021-01-01 维沃移动通信有限公司 Screenshot method and device and electronic equipment
CN112667835A (en) * 2020-12-23 2021-04-16 北京达佳互联信息技术有限公司 Work processing method and device, electronic equipment and storage medium
WO2021197165A1 (en) * 2020-03-31 2021-10-07 维沃移动通信有限公司 Picture processing method and electronic device
CN113744401A (en) * 2021-09-09 2021-12-03 网易(杭州)网络有限公司 Terrain splicing method and device, electronic equipment and storage medium
CN115564656A (en) * 2022-11-11 2023-01-03 成都智元汇信息技术股份有限公司 Multi-graph merging and graph recognizing method and device based on scheduling

Citations (9)

Publication number Priority date Publication date Assignee Title
CN104504651A (en) * 2015-01-22 2015-04-08 网易(杭州)网络有限公司 Preview generation method and device
CN104867105A (en) * 2015-05-30 2015-08-26 北京金山安全软件有限公司 Picture processing method and device
WO2015165222A1 (en) * 2014-04-29 2015-11-05 华为技术有限公司 Method and device for acquiring panoramic image
CN105225197A (en) * 2015-09-14 2016-01-06 北京金山安全软件有限公司 Picture clipping method and device
CN106485689A (en) * 2016-10-10 2017-03-08 努比亚技术有限公司 A kind of image processing method and device
CN107767340A (en) * 2017-10-26 2018-03-06 厦门理工学院 The synthesis preparation method of electronic photo
US20180096452A1 (en) * 2016-09-30 2018-04-05 Samsung Electronics Co., Ltd. Image processing apparatus and control method thereof
CN108986026A (en) * 2018-06-25 2018-12-11 努比亚技术有限公司 A kind of picture joining method, terminal and computer readable storage medium
CN110572369A (en) * 2019-08-14 2019-12-13 平安科技(深圳)有限公司 picture verification method and device, computer equipment and storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US7006709B2 (en) * 2002-06-15 2006-02-28 Microsoft Corporation System and method deghosting mosaics using multiperspective plane sweep
CN104657716A (en) * 2015-02-12 2015-05-27 杭州秋樽网络科技有限公司 SNS multi-image fusion method
CN105827987B (en) * 2016-05-27 2019-07-26 维沃移动通信有限公司 A kind of picture shooting method and mobile terminal
CN109919146A (en) * 2019-02-02 2019-06-21 上海兑观信息科技技术有限公司 Picture character recognition methods, device and platform
CN111461985A (en) * 2020-03-31 2020-07-28 维沃移动通信有限公司 Picture processing method and electronic equipment

Non-Patent Citations (1)

Title
风溪草堂: "刷爆朋友圈的九宫格拼图,你会吗?[附详细图文教程]", Retrieved from the Internet <URL:https://zhuanlan.zhihu.com> *

Also Published As

Publication number Publication date
WO2021197165A1 (en) 2021-10-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination