CN108460817B - Jigsaw puzzle method and mobile terminal - Google Patents

Jigsaw puzzle method and mobile terminal

Info

Publication number
CN108460817B
CN108460817B (application CN201810064803.1A)
Authority
CN
China
Prior art keywords
target
image
image area
picture
jigsaw
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810064803.1A
Other languages
Chinese (zh)
Other versions
CN108460817A (en)
Inventor
郑达川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201810064803.1A
Publication of CN108460817A
Application granted
Publication of CN108460817B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text

Abstract

The invention provides a jigsaw method and a mobile terminal. The method comprises the following steps: identifying at least one target object in a target picture; determining, for each target object in the at least one target object, a first image area in which that object is located, to obtain at least one first image area; extracting the images of all or part of the at least one first image area and adding them to a jigsaw candidate set; and composing a jigsaw according to the image of a first target image area in the jigsaw candidate set. Because the jigsaw method provided by the invention does not require the user to crop the target picture in advance, it has the advantage of simple jigsaw operation.

Description

Jigsaw puzzle method and mobile terminal
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a jigsaw method and a mobile terminal.
Background
At present, with the continuous progress of mobile-terminal camera technology, people commonly take pictures with a mobile terminal and often need to process the pictures further, for example by making a jigsaw, so as to share them to a social network. In the prior art, a picture to be pieced together and a jigsaw template are first determined; the picture is then filled into the jigsaw template and its position in the template is adjusted accordingly, so as to generate the final jigsaw. However, when the user only needs to make a jigsaw from a target image area of the picture, the user usually has to crop the picture appropriately in advance and then apply a jigsaw template in the jigsaw software to the cropped picture.
Therefore, the problem that the jigsaw operation is complicated exists in the jigsaw method adopted by the existing mobile terminal.
Disclosure of Invention
The embodiment of the invention provides a jigsaw method and a mobile terminal, which aim to solve the problem that the jigsaw operation is complicated in the jigsaw method adopted by the existing mobile terminal.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a jigsaw method, applied to a mobile terminal, where the method includes:
identifying at least one target object in the target picture;
respectively determining a first image area where each target object in the at least one target object in the target picture is located to obtain at least one first image area;
extracting images of all or part of the at least one first image area, and adding the images of all or part of the at least one first image area to the jigsaw candidate set;
and performing jigsaw puzzle according to the image of the first target image area in the jigsaw puzzle candidate object set.
In a second aspect, an embodiment of the present invention provides a mobile terminal, including:
the first identification module is used for identifying at least one target object in the target picture;
a first determining module, configured to determine a first image region where each target object in the at least one target object in the target picture is located, respectively, to obtain at least one first image region;
the first adding module is used for extracting the images of all or part of the at least one first image area and adding the images of all or part of the at least one first image area to the jigsaw candidate object set;
and the picture splicing module is used for carrying out picture splicing according to the image of the first target image area in the picture splicing candidate object set.
In a third aspect, an embodiment of the present invention provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps in the above-mentioned jigsaw puzzle method.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned jigsaw puzzle method.
In the embodiment of the present invention, the mobile terminal may identify at least one target object in a target picture, and respectively determine a first image region where each target object in the at least one target object in the target picture is located, to obtain at least one first image region, so as to extract an image of all or a part of the first image region in the at least one first image region, and perform jigsaw puzzle by using the extracted image of the image region as a candidate material for jigsaw puzzle. Therefore, the picture splicing method does not need the user to perform cutting operation on the target picture in advance, and has the advantage of simple picture splicing operation.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a flowchart of a jigsaw method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of extracting an image of a partial first image region in a target picture to a tile object candidate bar according to an embodiment of the present invention;
FIG. 3 is a flowchart of another jigsaw method according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a display result of searching for a picture matching an image of a second target image area in a candidate bar of a tile object according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another mobile terminal according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another mobile terminal according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of another mobile terminal according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of another mobile terminal according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a hardware structure of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a jigsaw method provided in an embodiment of the present invention, which is applied to a mobile terminal, and as shown in fig. 1, the method includes the following steps:
step 101, identifying at least one target object in the target picture.
In this embodiment, the target picture may be a photo selected from a local album or a cloud album of the mobile terminal, or a photo downloaded from the internet or a photographed photo. One or more objects may be included in the target picture, for example: humans, animals, food or buildings, etc.
In this step, at least one target object in the target picture may be identified, for example: identifying one or more persons in the target picture, or identifying persons and animals in the target picture, or identifying all objects in the target picture, and the like. Specifically, the target picture may be identified by an image feature identification technology or a machine learning algorithm, so as to identify at least one target object in the target picture, where the type of the target object may be an object type preset by a system, or an object type customized by a user.
In this way, in this step, at least one target object in the target picture can be determined by identifying the target picture, so that at least one first image region in the target picture can be determined according to the at least one target object in the target picture, and further, an image of all or part of the at least one first image region can be extracted as a jigsaw candidate material without requiring a user to perform a cropping operation on the target picture in advance.
Step 102, determining a first image area where each target object in the at least one target object in the target picture is located, respectively, to obtain at least one first image area.
In this step, after at least one target object in the target picture is identified, a first image region where each target object in the at least one target object in the target picture is located may be determined, respectively, to obtain at least one first image region. Specifically, the contour of each of the at least one target object in the target picture may be determined by using an image segmentation technique, such as edge segmentation or region segmentation, and then a first image region in which each target object in the target picture is located is determined according to the contour of each target object, where the first image region in which a certain target object is located may be an image region included in the contour of the target object or a minimum rectangular image region including the contour of the target object.
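The "minimum rectangular image region including the contour" described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the contour is assumed to be a list of (x, y) pixel coordinates already produced by an image-segmentation step (e.g. edge- or region-based segmentation), which is stubbed out here.

```python
# Hypothetical sketch of step 102: given the contour of a detected target
# object as a list of (x, y) pixel coordinates, compute the minimum
# axis-aligned rectangle (first image area) that contains the contour.

def bounding_region(contour):
    """Return (left, top, right, bottom) of the smallest rectangle
    enclosing every contour point."""
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    return (min(xs), min(ys), max(xs), max(ys))

# Example: a triangular contour for one target object.
contour = [(10, 40), (60, 5), (90, 70)]
print(bounding_region(contour))  # → (10, 5, 90, 70)
```

In practice the rectangle would be passed to a crop operation on the target picture to obtain the image of the first image area.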
In this way, in this step, by respectively determining the first image region where each of the at least one target object in the target picture is located, at least one first image region may be obtained, so that, according to the at least one first image region determined in the target picture, an image of all or part of the at least one first image region may be extracted therefrom as a jigsaw puzzle candidate material, without requiring a user to perform a cropping operation on the target picture in advance.
Step 103, extracting the images of all or part of the at least one first image area, and adding the images of all or part of the at least one first image area to the puzzle candidate set.
In this step, extracting the images of all or part of the at least one first image area may mean directly extracting them from the target picture according to a preset extraction number. When the number of first image areas is smaller than or equal to the preset extraction number, the images of all of the first image areas are extracted from the target picture; when the number of first image areas is larger than the preset extraction number, only the images of part of the first image areas are extracted, the number of extracted areas being equal to the preset extraction number.
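The preset-extraction-number rule above can be sketched as a simple selection function. Note this is a hedged illustration: the patent does not specify *which* part of the regions is kept when the count exceeds the preset number, so taking the first N is an assumption.

```python
# Sketch of the extraction rule: if the number of detected first image
# areas is at most the preset extraction number, take all of them;
# otherwise take only that many (here, the first N — an assumption).

def select_regions(regions, preset_count):
    if len(regions) <= preset_count:
        return list(regions)             # extract images of all regions
    return list(regions)[:preset_count]  # extract only part of them

regions = ["A1", "A2", "A3", "A4"]
print(select_regions(regions, 6))  # → ['A1', 'A2', 'A3', 'A4']
print(select_regions(regions, 2))  # → ['A1', 'A2']
```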
The extracting of the image of all or part of the at least one first image area may also be extracting, from the target picture, an image of all or part of the at least one first image area selected by the user, for example: as shown in fig. 2, when four image regions a1, a2, A3, and a4 are determined from a target picture 201 displayed in a mosaic object selection interface 200, a user can select any one or more image regions from the four image regions a1, a2, A3, and a4, so that the mobile terminal extracts an image of the image region selected by the user from the target picture 201.
In addition, the extracted images of all or part of the at least one first image area may be added to the set of puzzle candidates, so that these images serve as puzzle candidates and the user can conveniently select a target puzzle candidate from the set to make a puzzle. For example: as shown in fig. 2, the images of the three image areas a1, A3, and a4 in the target picture 201 that are selected by the user may be extracted and added to the tile object candidate bar 202 below the tile object selection interface 200, so that the user can select the image of a desired image area from the candidate bar 202 for the tiling process; here, the tile object candidate set may be understood as the set of images of the image areas added to the tile object candidate bar 202.
In this way, in this step, by extracting the images of all or part of the first image area in the at least one first image area and adding the images of all or part of the first image area in the at least one first image area to the puzzle candidate object set, it is possible to provide a user with more targeted and flexible puzzle candidate materials, and it is not necessary for the user to perform a cropping operation on the target picture, and further, the method has an advantage of being simple in operation.
And 104, performing jigsaw according to the image of the first target image area in the jigsaw candidate object set.
In this step, the puzzle may be composed according to the image of the first target image area in the set of puzzle candidates. Specifically, the images of all the image areas in the set may be taken as the images of the first target image area and used for the puzzle, or the images of the image areas selected by the user from the set may be taken as the images of the first target image area and used for the puzzle.
It should be noted that, for any target picture, the manner described in this embodiment may be adopted to extract the image area where the target object is located, and add the image of the image area where the target object is located to the puzzle candidate object set as a puzzle candidate object, so that the user can select a puzzle.
In this way, in this step, the purpose of performing jigsaw by using the image in the partial region in the target picture can be achieved by performing jigsaw according to the image in the first target image region in the jigsaw candidate set, and the user can arbitrarily select the image in the desired image region from the jigsaw candidate set to perform jigsaw, so that the method has better flexibility.
In this embodiment of the present invention, the mobile terminal may be any device having a storage medium, for example: terminal devices such as a Computer (Computer), a Mobile phone, a Tablet Personal Computer (Tablet Personal Computer), a Laptop Computer (Laptop Computer), a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), or a wearable Device (wearable Device).
In the jigsaw method in this embodiment, the mobile terminal may identify at least one target object in a target picture, and respectively determine a first image region in which each target object in the at least one target object in the target picture is located, to obtain at least one first image region, so that an image of all or part of the first image regions in the at least one first image region may be extracted, and the extracted image of the image region is used as a jigsaw candidate material for jigsaw puzzle splicing. Therefore, the picture splicing method does not need the user to perform cutting operation on the target picture in advance, and has the advantage of simple picture splicing operation.
Referring to fig. 3, fig. 3 is a flowchart of another puzzle method provided by the embodiment of the present invention, which is applied to a mobile terminal, and in this embodiment, on the basis of the embodiment shown in fig. 1, a step of identifying at least one target object in a target picture is refined, so that an implementation manner of identifying at least one target object in the target picture is more specific. As shown in fig. 3, the method comprises the steps of:
step 301, at least one category label of the target picture is obtained.
In this embodiment, after determining a target picture, at least one category tag of the target picture may be obtained first, where the category tag may be understood as a tag of a category to which an object included in the target picture belongs, for example: if the target picture includes a person, the tag of the category to which the person belongs is "person", so at least one category tag of the target picture may be "person".
The obtaining of the at least one category tag of the target picture may include two different embodiments, where a first embodiment may be that the mobile terminal adds the at least one category tag to the target picture in advance according to at least one object included in the target picture, so that when it is determined that the target picture prepares for a puzzle, the at least one category tag of the target picture may be directly obtained; the second implementation manner may be that after the target picture is determined, at least one object included in the target picture is identified, and then a tag of a category to which each object in the at least one object belongs is determined, so as to obtain at least one category tag of the target picture.
In this way, in this step, by obtaining at least one category tag of the target picture, a specific category tag included in the target picture can be displayed to a user, and the user can select one or more category tags as target category tags according to the user's own requirements, so that the mobile terminal can identify a target object matched with the target category tag from the target picture.
Optionally, before step 301, the method further includes:
identifying the category of each object in the target picture;
determining at least one category label of the target picture according to the label corresponding to the category of each object in the categories of the objects;
storing at least one category label of the target picture;
the step of obtaining at least one category label of the target picture includes:
and acquiring at least one category label of the stored target picture.
In this embodiment, the mobile terminal may determine at least one category label of the target picture in advance, so that it can obtain the labels directly when making a puzzle, thereby improving puzzle efficiency. Specifically, the category of each object in the target picture may be identified first, for example by a machine learning algorithm such as a Neural Network (NN) or a Support Vector Machine (SVM), which analyzes the target picture to determine each object it contains and the category of each object.
The determining at least one category label of the target picture according to the label corresponding to each category of the objects in the categories of the objects may be that the mobile terminal establishes a correspondence between the categories and the labels in advance, so that the label corresponding to each category of the objects in the categories of the objects can be determined according to the correspondence, and further, the at least one category label of the target picture is determined.
After determining the at least one category tag of the target picture, the at least one category tag of the target picture may also be stored, and specifically, the target picture and the at least one category tag of the target picture may be stored in a database, a folder, or the like in an associated manner. In this way, when the target picture puzzle is utilized, at least one category tag of the target picture can be obtained from the storage location of the category tag information of the target picture.
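The pre-compute-and-store flow above (identify categories, map them to labels via a pre-established correspondence, store the labels in association with the picture) can be sketched as follows. The category-to-label table, the in-memory store, and the picture identifiers are all illustrative stand-ins for the recognizer, database, and album entries the patent describes.

```python
# Illustrative sketch: pre-compute category labels for a picture and
# store them so the puzzle flow can fetch them directly later.

CATEGORY_TO_LABEL = {"human": "person", "dog": "animal", "house": "building"}
label_store = {}  # picture id -> set of category labels

def store_labels(picture_id, detected_categories):
    # Map each recognized object category to its label via the
    # pre-established correspondence, then store the result.
    labels = {CATEGORY_TO_LABEL[c] for c in detected_categories
              if c in CATEGORY_TO_LABEL}
    label_store[picture_id] = labels
    return labels

def get_labels(picture_id):
    # Direct lookup at puzzle time; no re-recognition needed.
    return label_store.get(picture_id, set())

store_labels("IMG_0001", ["human", "dog"])
print(sorted(get_labels("IMG_0001")))  # → ['animal', 'person']
```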
In this embodiment, by identifying the category of each object in the target picture in advance, and determining at least one category tag of the target picture according to the tag corresponding to the category of each object in the categories of each object, the mobile terminal can directly obtain the stored at least one category tag of the target picture when preparing a jigsaw puzzle, and does not need to obtain the stored at least one category tag by identifying and determining the category tag of the target picture when the jigsaw puzzle is made, so that the efficiency of the jigsaw puzzle can be improved.
Step 302, selecting a target category label from the at least one category label.
In this step, after obtaining the at least one category label of the target picture, a target category label may be selected from the at least one category label, specifically, a category label selected by a user from the at least one category label may be used as a target category label, where the user may select any one or more category labels from the at least one category label as the target category label. For example: when at least one category label of the acquired target picture comprises a person, an animal and a building, if the user selects the "person" category label, the "person" category label is used as the target category label, or if the user selects the two category labels of the "person" and the "animal", the two category labels of the "person" and the "animal" are used as the target category labels.
In this way, in this step, by selecting a target category tag from the at least one category tag, a target category tag in the target picture may be determined, so that the mobile terminal identifies a target object matching the target category tag from the target picture. In addition, the method can enable the user to freely select the target class label from the at least one class label according to the self requirement, thereby having greater flexibility.
Step 303, identifying at least one target object in the target picture, which is matched with the target category label.
In this step, after determining the target category tag, at least one target object in the target picture that matches the target category tag may be identified, and specifically, a target object category may be determined according to the target category tag, and then at least one target object that belongs to the target object category in the target picture is identified. For example: if the target type tag is a person, the target object type can be determined to be the person type, and then at least one person object belonging to the person type in the target picture can be identified.
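Step 303's filtering — keep only the detected objects whose category matches a user-selected target category label — can be sketched as below. The detection results are mocked; a real implementation would obtain them from an image-recognition model.

```python
# Sketch of step 303: filter detected objects by the target category
# labels selected by the user. Detection results are illustrative.

detections = [
    {"object": "woman", "category": "person"},
    {"object": "cat",   "category": "animal"},
    {"object": "tower", "category": "building"},
]

def match_targets(detections, target_labels):
    return [d for d in detections if d["category"] in target_labels]

matched = match_targets(detections, {"person", "animal"})
print([d["object"] for d in matched])  # → ['woman', 'cat']
```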
In this way, in this step, at least one target object in the target picture that matches the target category tag may be identified according to the target category tag, so that at least one first image region in the target picture may be determined according to the at least one target object in the target picture, and further, an image of all or part of the at least one first image region may be extracted as a jigsaw candidate material, without requiring a user to perform a cropping operation on the target picture in advance.
Step 304, determining a first image area where each target object in the at least one target object in the target picture is located, respectively, to obtain at least one first image area.
The specific implementation of this step can refer to the implementation of step 102 in the method embodiment shown in fig. 1, and the same beneficial effects can be achieved, and for avoiding repetition, details are not described here.
Step 305, extracting the images of all or part of the at least one first image area, and adding the images of all or part of the at least one first image area to the puzzle candidate set.
The specific implementation of this step can refer to the implementation of step 103 in the method embodiment shown in fig. 1, and the same beneficial effects can be achieved, and for avoiding repetition, details are not described here.
And step 306, performing jigsaw according to the image of the first target image area in the jigsaw candidate object set.
The specific implementation of this step can refer to the implementation of step 104 in the method embodiment shown in fig. 1, and the same beneficial effects can be achieved, and for avoiding repetition, details are not described here.
Optionally, before step 306, the method further includes:
if the selection operation on the target picture is detected, identifying a second image area selected by the selection operation on the target picture;
and extracting the image of the second image area, and adding the image of the second image area to the jigsaw candidate set.
In this embodiment, the user may select any image area on the target picture to use the image of the selected image area as the candidate for the puzzle. Specifically, if a selection operation on a target picture is detected, identifying a second image area selected by the selection operation on the target picture, where the selection operation may be a sliding operation, a frame selection operation, or another selection operation, for example: the user may perform a slide operation on the target picture and determine a closed image area through the slide operation, or the user may select an image area on the target picture through a selection tool.
When the selection operation is a sliding operation, the sliding track of the operation may be identified and checked against a preset condition. For example: first, it is determined whether the sliding track forms a closed area; if it does, the track meets the preset condition. If the track does not form a closed area, it is further determined whether the distance between the start point and the end point of the track is within a preset distance range; if it is, the track meets the preset condition, and otherwise it does not.
If it is determined that the sliding track meets the preset condition, determining a second image region in the target picture according to the sliding track, specifically, when the sliding track forms a closed region, determining the closed region formed by the sliding track in the target picture as the second image region; when the sliding track does not form a closed area, but the distance between the starting point and the end point of the sliding track is within the preset distance range, filling a track line between the starting point and the end point of the sliding track, so that the filled sliding track forms a closed area, and determining the closed area as the second image area.
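The sliding-track rule in the two paragraphs above can be sketched as a small function: a track already forming a closed area is accepted as-is, and a track whose start and end points are within a preset distance is closed by filling in the missing segment. The distance threshold and the closure test are illustrative assumptions.

```python
# Sketch of the sliding-track check: accept a closed track, or fill the
# start-end gap when it is within a preset distance; otherwise reject.

import math

def close_track(track, max_gap=20.0):
    """Return a closed track, or None if the preset condition fails."""
    if track[0] == track[-1]:
        return track                  # already forms a closed area
    gap = math.dist(track[0], track[-1])
    if gap <= max_gap:
        return track + [track[0]]    # fill the start-to-end segment
    return None                       # preset condition not met

open_track = [(0, 0), (50, 0), (50, 40), (5, 42)]
print(close_track(open_track))  # gap ≈ 42.3 > 20 → None
print(close_track([(0, 0), (50, 0), (50, 40), (2, 3)]))
```

The closed region returned here would then be used to extract the second image area from the target picture.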
Finally, the image of the second image area can be extracted from the target picture, and the image of the second image area is added to the set of jigsaw puzzle candidates, so that the image of the second image area is used as a jigsaw puzzle candidate, and therefore, a user can conveniently select a target jigsaw puzzle candidate from the set of jigsaw puzzle candidates to perform jigsaw puzzle.
It should be noted that, the user may determine a plurality of interested image areas in the target picture by performing a plurality of sliding operations, so that the mobile terminal extracts and adds the images of the plurality of image areas determined by the user to the puzzle candidate set.
In this embodiment, when there is an image area in the target picture, which is interested by another user, in addition to the at least one first image area, the user may further determine a second image area in the target picture by performing a selection operation on the target picture, and the mobile terminal extracts and adds an image of the second image area to the puzzle candidate object set for the user to select. Therefore, the implementation mode can provide a more flexible picture splicing material selecting method and richer picture splicing alternative materials for the user, and the interest of the picture splicing method is further increased.
Of course, the same can be applied to the embodiment shown in fig. 1 and the same advantageous effects can be achieved.
Optionally, after step 305 and before step 306, the method further includes:
if the preset touch operation aiming at the image of the second target image area in the jigsaw candidate object set is detected, searching a picture matched with the image of the second target image area;
adding a picture selected from the pictures matching the image of the second target image region to the set of tile candidates.
In this embodiment, an image of an image area in the set of tile candidates may be used as a source picture to search for a picture related to the image area, so that a user may select a desired picture from the pictures and add the selected picture to the set of tile candidates. Specifically, if a preset touch operation for an image of a second target image area in the puzzle candidate object set is detected, a picture matched with the image of the second target image area may be searched, where the preset touch operation may be a preset click search or a long press search, and the search for a picture matched with the image of the second target image area may be a search for a picture with a similarity to the image of the second target image area exceeding a minimum preset threshold from a local album, a cloud album, or the internet.
A picture selected from the matching pictures may then be added to the puzzle candidate set; specifically, the picture selected by the user from the matching pictures may be determined according to the user's selection operation, and the selected picture added to the puzzle candidate set. For example, as shown in fig. 2, if a search-selection operation on the image of image area a4 in the puzzle candidate bar 202 is detected, pictures matching the image of image area a4 are searched for (here the puzzle candidate set can be understood as the set of images of the image areas added to the puzzle candidate bar 202). As shown in fig. 4, if three matching pictures are found, the display may jump to the search result display interface 400, where the three pictures B1, B2 and B3 are shown; if the user selects the first picture B1, the picture B1 is added to the puzzle candidate bar 202.
It should be noted that, after the pictures matching the image of the second target image area are found, they may be displayed on the search result display interface in a preset order, for example: in order of storage time, picture file size, or similarity to the image of the second target image area.
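A minimal sketch of the preset ordering step follows. The record fields (`stored_at`, `file_size`, `similarity`) are assumed for illustration only; the patent names the three ordering criteria but not any particular data layout.

```python
def order_results(results, criterion="similarity"):
    """Sort search results for display according to a preset criterion."""
    key_funcs = {
        "time": lambda r: r["stored_at"],          # oldest first
        "size": lambda r: r["file_size"],          # smallest first
        "similarity": lambda r: -r["similarity"],  # most similar first
    }
    return sorted(results, key=key_funcs[criterion])
```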
In this way, the user may also select the image of any image area in the puzzle candidate set as a source picture, and the mobile terminal searches for matching pictures as recommendations, from which the user can pick pictures of interest to add to the puzzle candidate set as candidate materials. This embodiment therefore further enriches the candidate materials of the puzzle and makes the puzzle method more engaging.
Of course, this implementation can also be applied to the embodiment shown in fig. 1 and can achieve the same advantageous effects.
In this embodiment, on the basis of the embodiment shown in fig. 1, the step of identifying at least one target object in the target picture is refined, making its implementation more concrete. In addition, several optional implementations are added to the embodiment shown in fig. 1; these may be combined with one another or implemented separately, and each achieves the technical effect that the user need not crop the target picture in advance, so the puzzle operation remains simple.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention, and as shown in fig. 5, the mobile terminal 500 includes:
a first identification module 501, configured to identify at least one target object in a target picture;
a first determining module 502, configured to determine a first image region where each target object in the at least one target object in the target picture is located, respectively, to obtain at least one first image region;
a first adding module 503, configured to extract images of all or part of the at least one first image area, and add the extracted images to the puzzle candidate set;
a splicing module 504, configured to perform the puzzle according to the image of the first target image area in the puzzle candidate set.
Optionally, as shown in fig. 6, the first identifying module 501 includes:
an obtaining unit 5011, configured to obtain at least one category label of a target picture;
a selecting unit 5012 for selecting a target category label from the at least one category label;
the identifying unit 5013 is configured to identify at least one target object in the target picture matching the target category tag.
Optionally, as shown in fig. 7, the mobile terminal 500 further includes:
a second identification module 505, configured to identify a category of each object in the target picture;
a second determining module 506, configured to determine at least one category tag of the target picture according to a tag corresponding to a category of each object in the categories of the objects;
a storage module 507, configured to store at least one category tag of the target picture;
the obtaining unit 5011 is configured to obtain at least one category label of the stored target picture.
Optionally, as shown in fig. 8, the mobile terminal 500 further includes:
a third identifying module 508, configured to identify, if a selection operation on a target picture is detected, a second image area selected by the selection operation on the target picture;
a second adding module 509, configured to extract an image of the second image area, and add the image of the second image area to the puzzle candidate set.
Optionally, as shown in fig. 9, the mobile terminal 500 further includes:
a searching module 510, configured to search, if a preset touch operation on the image of a second target image area in the puzzle candidate set is detected, for a picture matching the image of the second target image area;
a third adding module 511, configured to add a picture selected from the pictures matching the image of the second target image area to the puzzle candidate set.
The mobile terminal 500 can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 and fig. 3, which is not repeated here. The mobile terminal 500 of the embodiment of the present invention can identify at least one target object in a target picture and determine the first image area where each of the at least one target object is located, obtaining at least one first image area, so that the images of all or part of the at least one first image area can be extracted and used as candidate materials for the puzzle. Therefore, the puzzle method does not require the user to crop the target picture in advance, and has the advantage of a simple puzzle operation.
Fig. 10 is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, where the mobile terminal 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, and a power supply 1011. Those skilled in the art will appreciate that the mobile terminal architecture illustrated in fig. 10 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 1010 is configured to identify at least one target object in the target picture;
respectively determining a first image area where each target object in the at least one target object in the target picture is located to obtain at least one first image area;
extracting images of all or part of the at least one first image area, and adding the extracted images to the puzzle candidate set;
and performing the puzzle according to the image of the first target image area in the puzzle candidate set.
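The four processor steps above can be sketched end to end as follows. This is an illustrative model only: images are represented as 2-D lists of pixel values, and `detect_objects` is a hypothetical stub standing in for the object-recognition step, which the patent does not tie to any particular detector.

```python
def detect_objects(picture):
    """Hypothetical detector: returns (top, left, bottom, right) boxes, one per object."""
    return [(0, 0, 2, 2), (0, 2, 2, 4)]  # two fixed regions for illustration

def extract_region(picture, box):
    """Crop the first image area where a detected object is located."""
    top, left, bottom, right = box
    return [row[left:right] for row in picture[top:bottom]]

def build_candidate_set(picture):
    """Steps 1-3: identify objects, determine their regions, extract the images."""
    return [extract_region(picture, box) for box in detect_objects(picture)]

def compose_puzzle(candidates):
    """Step 4: splice the selected candidate images side by side."""
    return [sum(rows, []) for rows in zip(*candidates)]
```

No prior cropping by the user is needed: the candidate set is populated directly from the detected regions.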
Optionally, the processor 1010 is further configured to: acquiring at least one category label of a target picture;
selecting a target category label from the at least one category label;
identifying at least one target object in the target picture that matches the target category label.
Optionally, the processor 1010 is further configured to: identifying the category of each object in the target picture;
determining at least one category label of the target picture according to the label corresponding to the category of each object in the categories of the objects;
storing at least one category label of the target picture;
and acquiring at least one category label of the stored target picture.
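A sketch of the category-label flow described above: identify the category of each object, derive the picture's labels, store them, and later retrieve them without re-running recognition. The category-to-label mapping here is illustrative, not taken from the patent.

```python
# Hypothetical mapping from object categories to category labels.
CATEGORY_LABELS = {"dog": "animal", "cat": "animal", "rose": "plant"}

label_store = {}  # picture id -> stored category labels

def store_labels(picture_id, object_categories):
    """Determine the picture's category labels from its objects and store them."""
    labels = {CATEGORY_LABELS.get(c, "other") for c in object_categories}
    label_store[picture_id] = labels
    return labels

def get_labels(picture_id):
    """Obtain the stored category labels of a target picture."""
    return label_store[picture_id]
```

Once labels are stored, selecting a target category label and identifying the objects matching it can proceed without repeating the per-object classification.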
Optionally, the processor 1010 is further configured to: if the selection operation on the target picture is detected, identifying a second image area selected by the selection operation on the target picture;
and extracting the image of the second image area, and adding the image of the second image area to the jigsaw candidate set.
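The user-selected second image area can be sketched in the same model: a selection gesture yields a rectangle, and the cropped region is appended to the candidate set. The (top, left, bottom, right) rectangle format is an assumption for illustration.

```python
def add_selected_region(picture, selection, candidates):
    """Crop the user-selected second image area and add it to the candidate set."""
    top, left, bottom, right = selection
    region = [row[left:right] for row in picture[top:bottom]]
    candidates.append(region)
    return region
```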
Optionally, the processor 1010 is further configured to: if a preset touch operation on the image of a second target image area in the puzzle candidate set is detected, searching for a picture matching the image of the second target image area;
adding a picture selected from the pictures matching the image of the second target image area to the puzzle candidate set.
The mobile terminal 1000 can implement the processes implemented by the mobile terminal in the foregoing embodiments, which are not repeated here. The mobile terminal 1000 of the embodiment of the present invention can identify at least one target object in a target picture and determine the first image area where each of the at least one target object is located, obtaining at least one first image area, so that the images of all or part of the at least one first image area can be extracted and used as candidate materials for the puzzle. Therefore, the puzzle method does not require the user to crop the target picture in advance, and has the advantage of a simple puzzle operation.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 1001 may be used for receiving and sending signals during message transmission or a call; specifically, it receives downlink data from a base station and forwards it to the processor 1010 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 1001 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 1001 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 1002, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 1003 may convert audio data received by the radio frequency unit 1001 or the network module 1002 or stored in the memory 1009 into an audio signal and output as sound. Also, the audio output unit 1003 may also provide audio output related to a specific function performed by the mobile terminal 1000 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 1003 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1004 is used to receive audio or video signals. The input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042. The graphics processor 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 1006, and the image frames processed by the graphics processor 10041 may be stored in the memory 1009 (or other storage medium) or transmitted via the radio frequency unit 1001 or the network module 1002. The microphone 10042 can receive sound and process it into audio data; in a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 1001.
The mobile terminal 1000 can also include at least one sensor 1005, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 10061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 10061 and/or the backlight when the mobile terminal 1000 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 1005 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 1006 is used to display information input by the user or information provided to the user. The Display unit 1006 may include a Display panel 10061, and the Display panel 10061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 1007 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations using a finger, a stylus, or any other suitable object or attachment). The touch panel 10071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 1010, and receives and executes commands sent by the processor 1010. The touch panel 10071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 10071, the user input unit 1007 may include other input devices 10072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, and are not described further here.
Further, the touch panel 10071 can be overlaid on the display panel 10061; when the touch panel 10071 detects a touch operation on or near it, the operation is transmitted to the processor 1010 to determine the type of the touch event, and the processor 1010 then provides a corresponding visual output on the display panel 10061 according to the type of the touch event. Although in fig. 10 the touch panel 10071 and the display panel 10061 are two independent components implementing the input and output functions of the mobile terminal, in some embodiments they may be integrated to implement these functions, which is not limited herein.
The interface unit 1008 is an interface through which an external device is connected to the mobile terminal 1000. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1008 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 1000 or may be used to transmit data between the mobile terminal 1000 and external devices.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook). Further, the memory 1009 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 1010 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 1009 and calling data stored in the memory 1009, thereby integrally monitoring the mobile terminal. Processor 1010 may include one or more processing units; preferably, the processor 1010 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The mobile terminal 1000 may also include a power supply 1011 (e.g., a battery) for powering the various components, and the power supply 1011 may be logically coupled to the processor 1010 via a power management system that may be configured to manage charging, discharging, and power consumption.
In addition, the mobile terminal 1000 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, which includes a processor 1010, a memory 1009, and a computer program stored in the memory 1009 and executable on the processor 1010; when executed by the processor 1010, the computer program implements each process of the above puzzle method embodiments and can achieve the same technical effects, which are not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned embodiment of the jigsaw puzzle method, and can achieve the same technical effect, and in order to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. A jigsaw method applied to a mobile terminal is characterized in that the method comprises the following steps:
identifying at least one target object in the target picture;
respectively determining a first image area where each target object in the at least one target object in the target picture is located to obtain at least one first image area;
extracting images of all or part of the at least one first image area, and adding the images of all or part of the at least one first image area to the jigsaw candidate set;
performing jigsaw puzzle according to the image of the first target image area in the jigsaw candidate object set;
the performing the puzzle according to the image of the first target image area in the puzzle candidate set includes:
taking the images of all image areas in the jigsaw candidate object set as the images of the first target image area, and performing jigsaw by using the images of the first target image area; or taking the image of the partial image area in the jigsaw candidate set as the image of the first target image area, and performing jigsaw by using the image of the first target image area.
2. The method of claim 1, wherein the step of identifying at least one target object in the target picture comprises:
acquiring at least one category label of a target picture;
selecting a target category label from the at least one category label;
identifying at least one target object in the target picture that matches the target category label.
3. The method of claim 2, wherein the step of obtaining at least one category label of the target picture is preceded by the method further comprising:
identifying the category of each object in the target picture;
determining at least one category label of the target picture according to the label corresponding to the category of each object in the categories of the objects;
storing at least one category label of the target picture;
the step of obtaining at least one category label of the target picture includes:
and acquiring at least one category label of the stored target picture.
4. The method according to any of claims 1 to 3, wherein prior to the step of performing a mosaic from an image of a first target image area in the set of mosaic candidates, the method further comprises:
if the selection operation on the target picture is detected, identifying a second image area selected by the selection operation on the target picture;
and extracting the image of the second image area, and adding the image of the second image area to the jigsaw candidate set.
5. The method according to any one of claims 1 to 3, wherein after the step of extracting the image of all or part of the at least one first image region and adding the image of all or part of the at least one first image region to the set of puzzle candidates, and before the step of mosaicing according to the image of the first target image region in the set of puzzle candidates, the method further comprises:
if the preset touch operation aiming at the image of the second target image area in the jigsaw candidate object set is detected, searching a picture matched with the image of the second target image area;
adding a picture selected from the pictures matching the image of the second target image region to the set of tile candidates.
6. A mobile terminal, comprising:
the first identification module is used for identifying at least one target object in the target picture;
a first determining module, configured to determine a first image region where each target object in the at least one target object in the target picture is located, respectively, to obtain at least one first image region;
the first adding module is used for extracting the images of all or part of the at least one first image area and adding the images of all or part of the at least one first image area to the jigsaw candidate object set;
the picture splicing module is used for carrying out picture splicing according to the image of the first target image area in the picture splicing candidate object set;
the puzzle module is specifically configured to: taking the images of all image areas in the jigsaw candidate object set as the images of the first target image area, and performing jigsaw by using the images of the first target image area; or taking the image of the partial image area in the jigsaw candidate set as the image of the first target image area, and performing jigsaw by using the image of the first target image area.
7. The mobile terminal of claim 6, wherein the first identification module comprises:
the acquisition unit is used for acquiring at least one category label of the target picture;
a selecting unit, configured to select a target category label from the at least one category label;
and the identification unit is used for identifying at least one target object matched with the target class label in the target picture.
8. The mobile terminal of claim 7, wherein the mobile terminal further comprises:
the second identification module is used for identifying the category of each object in the target picture;
a second determining module, configured to determine at least one category tag of the target picture according to a tag corresponding to a category of each object in the categories of the objects;
the storage module is used for storing at least one category label of the target picture;
the acquisition unit is used for acquiring at least one category label of the stored target picture.
9. The mobile terminal according to any of claims 6 to 8, characterized in that the mobile terminal further comprises:
the third identification module is used for identifying a second image area selected by the selection operation on the target picture if the selection operation on the target picture is detected;
and the second adding module is used for extracting the image of the second image area and adding the image of the second image area to the jigsaw candidate object set.
10. The mobile terminal according to any of claims 6 to 8, characterized in that the mobile terminal further comprises:
the searching module is used for searching a picture matched with the image of the second target image area if the preset touch operation aiming at the image of the second target image area in the jigsaw candidate object set is detected;
a third adding module, configured to add a picture selected from the pictures matched with the image of the second target image area to the set of puzzle candidates.
11. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps in the puzzle method according to any one of claims 1 to 5.
CN201810064803.1A 2018-01-23 2018-01-23 Jigsaw puzzle method and mobile terminal Active CN108460817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810064803.1A CN108460817B (en) 2018-01-23 2018-01-23 Jigsaw puzzle method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810064803.1A CN108460817B (en) 2018-01-23 2018-01-23 Jigsaw puzzle method and mobile terminal

Publications (2)

Publication Number Publication Date
CN108460817A CN108460817A (en) 2018-08-28
CN108460817B true CN108460817B (en) 2022-04-12

Family

ID=63238678

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810064803.1A Active CN108460817B (en) 2018-01-23 2018-01-23 Jigsaw puzzle method and mobile terminal

Country Status (1)

Country Link
CN (1) CN108460817B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109495616B (en) * 2018-11-30 2021-02-26 维沃移动通信(杭州)有限公司 Photographing method and terminal equipment
CN109598678B (en) * 2018-12-25 2023-12-12 维沃移动通信有限公司 Image processing method and device and terminal equipment
CN111359201B (en) * 2020-03-08 2023-08-15 北京智明星通科技股份有限公司 Jigsaw-type game method, system and equipment
CN113840169B (en) * 2020-06-23 2023-09-19 中国移动通信集团辽宁有限公司 Video processing method, device, computing equipment and storage medium
CN113194256B (en) * 2021-04-29 2023-04-25 维沃移动通信(杭州)有限公司 Shooting method, shooting device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101535996A (en) * 2006-11-14 2009-09-16 皇家飞利浦电子股份有限公司 Method and apparatus for identifying an object captured by a digital image
CN102236890A (en) * 2010-05-03 2011-11-09 微软公司 Generating a combined image from multiple images
CN104504649A (en) * 2014-12-30 2015-04-08 百度在线网络技术(北京)有限公司 Picture cutting method and device
CN106780325A (en) * 2016-11-29 2017-05-31 维沃移动通信有限公司 A kind of picture joining method and mobile terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106961559B (en) * 2017-03-20 2019-03-05 维沃移动通信有限公司 A kind of production method and electronic equipment of video

Also Published As

Publication number Publication date
CN108460817A (en) 2018-08-28

Similar Documents

Publication Publication Date Title
CN108460817B (en) Jigsaw puzzle method and mobile terminal
CN109032734B (en) Background application program display method and mobile terminal
CN108494947B (en) Image sharing method and mobile terminal
CN109240577B (en) Screen capturing method and terminal
CN110674662B (en) Scanning method and terminal equipment
CN108494665B (en) Group message display method and mobile terminal
CN107943390B (en) Character copying method and mobile terminal
CN109409244B (en) Output method of object placement scheme and mobile terminal
CN109005336B (en) Image shooting method and terminal equipment
CN109495616B (en) Photographing method and terminal equipment
CN109388456B (en) Head portrait selection method and mobile terminal
CN107783709B (en) Image viewing method and mobile terminal
CN109523253B (en) Payment method and device
CN107728877B (en) Application recommendation method and mobile terminal
CN111080747B (en) Face image processing method and electronic equipment
CN107450744B (en) Personal information input method and mobile terminal
CN109669710B (en) Note processing method and terminal
CN108765522B (en) Dynamic image generation method and mobile terminal
CN111143614A (en) Video display method and electronic equipment
CN108062370B (en) Application program searching method and mobile terminal
CN110795002A (en) Screenshot method and terminal equipment
CN108459813A (en) A kind of searching method and mobile terminal
CN110045892B (en) Display method and terminal equipment
CN107844203B (en) Input method candidate word recommendation method and mobile terminal
CN108471549B (en) Remote control method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant