CN108366203B - Composition method, composition device, electronic equipment and storage medium - Google Patents

Composition method, composition device, electronic equipment and storage medium

Info

Publication number
CN108366203B
CN108366203B
Authority
CN
China
Prior art keywords
image
target image
composition
preset
determined
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810170997.3A
Other languages
Chinese (zh)
Other versions
CN108366203A (en)
Inventor
高嘉宏
曹莎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jupiter Technology Co ltd
Original Assignee
Beijing Kingsoft Internet Security Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingsoft Internet Security Software Co Ltd filed Critical Beijing Kingsoft Internet Security Software Co Ltd
Priority to CN201810170997.3A priority Critical patent/CN108366203B/en
Publication of CN108366203A publication Critical patent/CN108366203A/en
Application granted granted Critical
Publication of CN108366203B publication Critical patent/CN108366203B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/255Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the invention provide a composition method, a composition device, an electronic device and a storage medium, wherein the method comprises: acquiring a target image; determining at least one object included in the target image; determining a subject object from the determined at least one object; and, taking the position of the subject object in the target image as the subject position in a preset composition principle, performing composition processing on the target image according to the preset composition principle. With the technical scheme provided by the embodiments of the invention, the subject object can be determined from the target image, and the target image can be recomposed according to the determined subject object and a preset composition principle. For the user, the image can thus be recomposed without manual operation, which improves the convenience of composition processing.

Description

Composition method, composition device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular to a composition method, a composition device, an electronic device, and a storage medium.
Background
Nowadays, users can record their lives and experiences by taking photographs, and as technology develops, shooting devices have become increasingly diverse. Besides traditional digital cameras and professional single-lens reflex cameras, the now-ubiquitous mobile phone also carries a camera, further meeting users' daily photography needs.
Users' expectations for their photos have gradually risen. To meet the demand for high-quality photos, the common practice is to improve the hardware of the shooting device, for example increasing the pixel count to obtain sharper pictures. However, a high-quality photo depends not only on the shooting device but also on factors such as composition. Current shooting devices can therefore display auxiliary lines in the viewfinder to help the user compose the shot while shooting. Besides composing during shooting, the user can also crop a photo afterwards in order to recompose it.
However, whether composition is performed against auxiliary lines while shooting or by cropping afterwards, it requires manual operation by the user. This manual composition adds operation steps to the process of obtaining a high-quality photo, making the operation process cumbersome.
Disclosure of Invention
An embodiment of the present invention provides a composition method, a composition device, an electronic device and a storage medium, so as to solve the problem that manual composition by the user makes the operation cumbersome. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a composition method, where the method includes:
acquiring a target image;
determining at least one object included in the target image;
determining a subject object from the determined at least one object;
and taking the position of the subject object in the target image as a subject position in a preset composition principle, and performing composition processing on the target image according to the preset composition principle.
Optionally, the determining a subject object from the determined at least one object includes:
judging whether the determined at least one object includes an object of a preset type, and if so, determining the object of the preset type among the determined objects as the subject object; or,
determining the object with the largest region area among the determined at least one object as the subject object; or,
according to an instruction for selecting an object from the determined at least one object, determining the object specified by the instruction as the subject object; or,
determining an object located in a preset area of the target image, among the determined at least one object, as the subject object; or,
acquiring the proportion of the area of each determined object within the target image, and determining the subject object from among the objects whose proportion is larger than a preset proportion threshold.
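As a non-authoritative illustration of the alternatives above, the preset-type rule, the area-ratio threshold, and the largest-area fallback can be sketched together. The field names (`type`, `area`), the fallback order, and the default threshold are assumptions made for the sketch, not prescribed by the disclosure:

```python
def select_subject(objects, preset_type="person", ratio_threshold=0.1,
                   image_area=1.0):
    """Pick a subject object from detected objects.

    `objects` is a list of dicts with hypothetical fields `type` and
    `area` (area as a fraction of the image, so image_area defaults to 1).
    """
    # Rule 1: prefer an object of the preset type (largest if several).
    typed = [o for o in objects if o["type"] == preset_type]
    if typed:
        return max(typed, key=lambda o: o["area"])
    # Rule 2: otherwise keep only objects above the area-ratio threshold...
    candidates = [o for o in objects
                  if o["area"] / image_area > ratio_threshold]
    # ...and take the largest remaining region; None if nothing detected.
    pool = candidates or objects
    return max(pool, key=lambda o: o["area"]) if pool else None
```

In a real system the selection order among these alternatives would itself be a design choice; the disclosure presents them as interchangeable options.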
Optionally, the determining, as the subject object, an object of a preset type among the determined objects includes:
when the number of objects of the preset type included in the determined objects is one, directly determining that object of the preset type as the subject object; or,
when the number of objects of the preset type included in the determined objects is at least two, determining the object of the preset type having the largest area as the subject object; or,
when the number of objects of the preset type included in the determined objects is at least two, determining the object of the preset type with the highest definition (sharpness) as the subject object.
Optionally, the composition principle includes at least one of a center composition principle, a rule-of-thirds composition principle, and a golden-section composition principle.
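As an illustrative aside (not part of the claims), the subject positions implied by these three principles for an output frame of size w x h can be written down directly. Choosing the upper-left thirds point and the 0.618 ratio are assumptions; each principle actually admits several symmetric points:

```python
GOLDEN = 0.618  # golden-section ratio (approximately 1/phi)

def subject_positions(w, h):
    """Target subject position (x, y) under each composition principle."""
    return {
        "center": (w / 2, h / 2),                        # center composition
        "thirds": (w / 3, h / 3),                        # one of 4 thirds points
        "golden": (w * (1 - GOLDEN), h * (1 - GOLDEN)),  # ~0.382 section point
    }
```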
Optionally, the taking the position of the subject object in the target image as the subject position in a preset composition principle, and performing composition processing on the target image according to the preset composition principle includes:
determining the composition principle corresponding to the subject object as the target composition principle, according to a preset correspondence between object types and composition principles;
and taking the position of the subject object in the target image as the subject position in the target composition principle, and performing composition processing on the target image according to the target composition principle.
Optionally, the taking the position of the subject object in the target image as the subject position in a preset composition principle, and performing composition processing on the target image according to the preset composition principle includes:
determining, from among at least two preset composition principles, the composition principle that yields the largest image area after composition processing of the target image as the target composition principle;
and taking the position of the subject object in the target image as the subject position in the target composition principle, and performing composition processing on the target image according to the target composition principle.
Optionally, the determining at least one object included in the target image includes:
identifying at least one object included in the target image by a neural network image semantic segmentation model.
Optionally, before the step of acquiring the target image, the method further includes:
acquiring a sample image, wherein the sample image comprises at least one marker object;
and training a preset neural network image semantic segmentation model by using the sample image to obtain the neural network image semantic segmentation model meeting preset conditions.
Optionally, the performing composition processing on the target image according to a preset composition principle includes:
determining, based on the subject position in the preset composition principle, the length of each side of a first image when the area of the first image is maximum, wherein the first image is: an image within the range of the target image, obtained by performing composition processing on the target image according to the preset composition principle;
and cutting the target image according to the determined length of each side to obtain a composition processing image.
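The maximum-area side-length determination described above can be sketched for a single composition point: with the subject at (sx, sy) in a W x H image and a target fractional position (fx, fy) inside the crop, each side length is bounded by the nearer image edge on that axis. The closed-form bound and the default thirds point are illustrative assumptions, not the patent's stated algorithm:

```python
def max_crop(W, H, sx, sy, fx=1/3, fy=1/3):
    """Largest axis-aligned crop of a W x H image placing the subject
    point (sx, sy) at fraction (fx, fy) of the crop (0 < fx, fy < 1).

    Returns (left, top, width, height) of the crop rectangle.
    """
    # Width is limited by the left margin (sx) and the right margin (W - sx):
    # left edge: sx - fx*w >= 0, right edge: sx + (1 - fx)*w <= W.
    w = min(sx / fx, (W - sx) / (1 - fx))
    h = min(sy / fy, (H - sy) / (1 - fy))
    left, top = sx - fx * w, sy - fy * h
    return left, top, w, h
```

Because the width and height bounds are independent, maximizing each separately maximizes the crop area when the aspect ratio is unconstrained.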
Optionally, the performing composition processing on the target image according to a preset composition principle includes:
determining a second image based on the position of the subject in a preset composition principle, wherein the size of the second image is the same as that of the target image;
comparing the second image with the target image to determine an overflow area, wherein the overflow area is the area of the second image that exceeds the target image;
filling the overflow area with the pre-acquired image information corresponding to the overflow area;
and splicing the filled overflow area with the overlapped area to form a third image, wherein the size of the third image is the same as that of the target image, and the overlapped area is the area where the second image overlaps the target image.
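A minimal sketch of this overflow-filling idea, assuming the pre-acquired surrounding image information is stored as a padded buffer with the target image at its centre. The padding layout and the single (dx, dy) reframing shift are assumptions for illustration; reading the shifted window out of the padded buffer yields the overlap (original pixels) and the filled overflow (surrounding pixels) in one step:

```python
import numpy as np

def recompose_with_fill(img, surround, dx, dy):
    """Reframe `img` by shifting its window (dx, dy) pixels.

    `surround` is assumed to be an (H + 2p, W + 2p) buffer of pre-acquired
    surrounding image information with `img` occupying its centre;
    |dx|, |dy| must not exceed the padding p.
    """
    H, W = img.shape[:2]
    p = (surround.shape[0] - H) // 2
    # The shifted window covers the overlapped area (original pixels)
    # and the overflow area (pre-stored surrounding pixels) at once.
    return surround[p + dy : p + dy + H, p + dx : p + dx + W]
```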
Optionally, after the step of acquiring the target image, the method further includes:
acquiring image information in a preset range around the target image;
the acquired image information is stored.
In a second aspect, an embodiment of the present invention provides a composition device, the device comprising:
the first acquisition module is used for acquiring a target image;
a first determination module for determining at least one object included in the target image;
a second determination module for determining a subject object from the determined at least one object;
and the composition processing module is used for taking the position of the subject object in the target image as the subject position in a preset composition principle and performing composition processing on the target image according to the preset composition principle.
Optionally, the second determining module includes:
a first determining submodule, used for judging whether the determined at least one object includes an object of a preset type, and if so, determining the object of the preset type among the determined objects as the subject object; or,
a second determining submodule, configured to determine the object with the largest region area among the determined at least one object as the subject object; or,
a third determining submodule, configured to determine, according to an instruction for selecting an object from the determined at least one object, the object specified by the instruction as the subject object; or,
a fourth determining submodule, configured to determine an object located in a preset region of the target image, among the determined at least one object, as the subject object; or,
a fifth determining submodule, used for acquiring the proportion of the area of each of the determined objects within the target image, and determining the subject object from among the objects whose proportion is larger than a preset proportion threshold.
Optionally, the first determining submodule is specifically configured to:
when the number of objects of the preset type included in the determined objects is one, directly determine that object of the preset type as the subject object; or,
when the number of objects of the preset type included in the determined objects is at least two, determine the object of the preset type having the largest area as the subject object; or,
when the number of objects of the preset type included in the determined objects is at least two, determine the object of the preset type with the highest definition (sharpness) as the subject object.
Optionally, the composition principle includes at least one of a center composition principle, a rule-of-thirds composition principle, and a golden-section composition principle.
Optionally, the composition processing module includes:
a sixth determining sub-module, configured to determine, according to a correspondence between a preset type of the subject and a composition principle, a composition principle corresponding to the subject as a target composition principle;
and the first composition processing submodule is used for taking the position of the subject object in the target image as the subject position in the target composition principle and performing composition processing on the target image according to the target composition principle.
Optionally, the composition processing module is specifically configured to:
determining, from among at least two preset composition principles, the composition principle that yields the largest image area after composition processing of the target image as the target composition principle;
and taking the position of the subject object in the target image as the subject position in the target composition principle, and performing composition processing on the target image according to the target composition principle.
Optionally, the first determining module is specifically configured to:
identifying at least one object included in the target image by a neural network image semantic segmentation model.
Optionally, the apparatus further comprises:
a second acquisition module for acquiring a sample image, wherein the sample image comprises at least one marker object;
and the training module is used for training a preset neural network image semantic segmentation model by using the sample image to obtain the neural network image semantic segmentation model meeting the preset conditions.
Optionally, the composition processing module is specifically configured to:
determining, based on the subject position in the preset composition principle, the length of each side of a first image when the area of the first image is maximum, wherein the first image is: an image within the range of the target image, obtained by performing composition processing on the target image according to the preset composition principle;
and cutting the target image according to the determined length of each side to obtain a composition processing image.
Optionally, the composition processing module is specifically configured to:
determining a second image based on the position of the subject in a preset composition principle, wherein the size of the second image is the same as that of the target image;
comparing the second image with the target image to determine an overflow area, wherein the overflow area is the area of the second image that exceeds the target image;
filling the overflow area with the pre-acquired image information corresponding to the overflow area;
and performing image splicing on the filled overflow area and the overlapped area to form a third image, wherein the size of the third image is the same as that of the target image, and the overlapped area is the area where the second image overlaps the target image.
Optionally, the apparatus further comprises:
the third acquisition module is used for acquiring image information in a preset range around the target image;
and the storage module is used for storing the acquired image information.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
and a processor for implementing any of the steps of the composition method described above when executing the program stored in the memory.
In a fourth aspect, the present invention provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements any one of the steps of the composition method described above.
In a fifth aspect, embodiments of the present invention provide a computer program which, when run on a computer, causes the computer to perform any one of the steps of the composition method described above.
In the technical scheme provided by the embodiment of the invention, a target image is obtained, and at least one object included in the target image is determined; determining a subject object from the determined at least one object; and taking the position of the subject object in the target image as the subject position in a preset composition principle, and performing composition processing on the target image according to the preset composition principle. Through the technical scheme provided by the embodiment of the invention, the main body object can be determined from the target image, and the target image is subjected to recomposition processing according to the determined main body object and a preset composition principle. Thus, for the user, the image can still be reconstructed under the condition of avoiding manual operation, and the convenience of composition processing is further improved.
Drawings
To illustrate the embodiments of the present invention or the prior-art technical solutions more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flow chart of a composition method according to an embodiment of the present invention;
FIG. 2 is a target image provided by an embodiment of the present invention;
FIG. 3a is an original image of a target image according to an embodiment of the present invention;
FIG. 3b is a semantic segmentation processed image according to an embodiment of the present invention;
FIG. 4a is an original image of another target image according to an embodiment of the present invention;
FIG. 4b is another semantic segmentation processed image according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a central composition principle provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of the rule-of-thirds composition principle provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating the golden section composition principle according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a composition process provided by an embodiment of the present invention;
FIG. 9 is another schematic diagram of a composition process provided by an embodiment of the invention;
FIG. 10 is a schematic structural diagram of a composition device according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
In order to solve the problem of complex operation during composition processing and further improve the convenience of the composition processing, the embodiment of the invention provides a composition method. The composition method provided by the embodiment of the invention comprises the following steps:
acquiring a target image;
determining at least one object included in the target image;
determining a subject object from the determined at least one object;
and taking the position of the subject object in the target image as the subject position in a preset composition principle, and performing composition processing on the target image according to the preset composition principle.
Through the technical scheme provided by the embodiment of the invention, the main body object can be determined from the target image, and the target image is subjected to recomposition processing according to the determined main body object and a preset composition principle. Thus, for the user, the image can still be reconstructed under the condition of avoiding manual operation, and the convenience of composition processing is further improved.
First, the composition method according to an embodiment of the present invention is described. As shown in FIG. 1, the composition method includes the following steps:
s101, acquiring a target image.
The target image may be an image captured by an image-capturing device (such as a camera) during shooting; that is, the target image is the image generated after the user presses the shutter key and before it is presented to the user. In this case, the present scheme recomposes the target image, obtains the recomposed image, and then presents the recomposed image to the user.
In addition, the target image may be an existing image that is recomposed at the user's request. For example, the target image may be a photo taken years ago that is recomposed now; or the target image may be a photo the user has just taken and stored with a mobile phone, which is then recomposed immediately.
S102, at least one object included in the target image is determined.
An object in the embodiments of the present invention may be any thing displayed in the image; for example, an object may be a person, an animal, an article, and so on.
Each object in the target image occupies a certain image area. Even when several objects in the target image belong to the same category, each of them still occupies its own region; in that case the target image is considered to include several objects at once.
For example, as shown in FIG. 2, the objects included in the target image are classified into tree, sky, grass, and person, and the person category contains four persons, A, B, C and D. Since A, B, C and D each occupy a region in the target image, the target image includes the four person objects A, B, C and D.
In one embodiment, at least one object included in the target image may be identified by a neural network image semantic segmentation model. In a specific implementation manner, the neural network image semantic segmentation model may classify each pixel point in the target image, and identify the pixel points belonging to the same classification as the same color, so that the image obtained after semantic segmentation of the target image may be an image composed of regions of different colors.
For example, FIG. 3a shows the original target image, and FIG. 3b shows the image obtained after semantic segmentation of the target image by the neural network image semantic segmentation model. The cars in the target image are all marked with the same colour, and in FIG. 3b the colour marking cars covers four separate regions, so the target image includes four car objects.
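The four-cars example illustrates why pixels of the same class can still yield several objects: each connected region of one class counts as a separate object. A toy sketch over a per-pixel class map follows; the 4-connectivity and the integer class-map format are assumptions, since a real model's output may differ:

```python
import numpy as np

def count_instances(class_map, cls):
    """Count connected regions of one class in a per-pixel class map."""
    mask = (class_map == cls)
    seen = np.zeros_like(mask, dtype=bool)
    H, W = mask.shape
    n = 0
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                n += 1                      # found a new region
                stack = [(i, j)]            # flood-fill to mark it
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < H and 0 <= nx < W
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return n
```

In practice a library routine such as `scipy.ndimage.label` would replace the hand-rolled flood fill.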
For the neural network image semantic segmentation model used, in one embodiment, the neural network image semantic segmentation model may be trained through a sample image. Specifically, a sample image may be acquired; and training a preset neural network image semantic segmentation model by using the sample image to obtain the neural network image semantic segmentation model meeting the preset conditions.
Wherein the sample image comprises at least one marker object, which may be the same as the target object in an embodiment of the present invention. For example, fig. 2 may be taken as a sample image, and the marker objects in fig. 2 may include trees, sky, grass, and people.
When the neural network image semantic segmentation model performs semantic segmentation on a sample image, the more accurately the pixels in the sample image are classified, the higher the accuracy of identifying the marked objects in the sample image. The preset condition may be set as desired.
For example, the preset condition may be set as: the pixel-classification accuracy reaches 70% or more. The sample images are then used to train the neural network image semantic segmentation model repeatedly; once the model's pixel-classification accuracy reaches 70% or more, it can be applied in the embodiments of the present invention to perform semantic segmentation on the target image.
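A hedged sketch of this train-until-threshold loop follows; `train_one_epoch` and `pixel_accuracy` are hypothetical stand-ins for whichever segmentation framework is used, and the 70% target and epoch cap are the only values taken from the text:

```python
TARGET_ACC = 0.70  # preset condition: pixel-classification accuracy >= 70%

def train_until(model, data, train_one_epoch, pixel_accuracy,
                max_epochs=100):
    """Train repeatedly until the preset accuracy condition is met.

    Returns the model and its last measured accuracy; stops early as
    soon as the condition holds, otherwise after max_epochs.
    """
    acc = 0.0
    for _ in range(max_epochs):
        train_one_epoch(model, data)
        acc = pixel_accuracy(model, data)
        if acc >= TARGET_ACC:      # preset condition met: stop training
            break
    return model, acc
```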
S103, determining a subject object from the determined at least one object.
The subject object may be regarded as the object to be emphasized in the target image; composition is performed around the subject object so that the finally obtained composed image highlights it.
When the target image includes at least two objects, the subject object may be determined according to a preset rule. Several embodiments of determining the subject object are described below:
in a first embodiment, it is determined whether the determined at least one object includes an object of a preset type, and if the determined at least one object includes an object of a preset type, the object of the preset type in the determined object is determined as a main object.
After the object is determined in step S102, it is determined whether the determined object includes an object of a preset type, and if the determined object includes an object of a preset type, the object of the preset type may be directly determined as a subject object; if no preset type of object is included, other embodiments of determining a subject object may be employed to determine a subject object.
The preset type of object may be set by a user. For example, if the object of the preset type may be a person, it is determined whether the object determined in the target image includes a person, and if the object includes a person, the person determined in the target image is taken as a subject object.
In a second embodiment, building on the first, when the determined at least one object includes objects of the preset type, the subject object is further determined according to the number of such objects, in at least the following ways:
in a first implementation manner, when the number of the preset type objects included in the determined object is one, the one preset type object may be directly used as the subject object.
For example, if the preset type of object is a person and only one person a is included in the objects determined in the target image, the person a may be directly used as the subject object.
In a second implementation manner, when the determined objects include at least two objects of the preset type, the object of the preset type occupying the largest area is used as the subject object.
Specifically, after the objects included in the target image are determined, the area occupied by each object can be obtained. In this way, for the objects of the preset types included in the determined objects, the area occupied by each object of the preset type can also be obtained.
In this way, when there are at least two preset types of objects in the determined objects, the preset type of object having the largest area may be taken as the subject object.
For example, if the preset type is person and the objects determined in the target image include three persons A, B, and C, where the area occupied by A is larger than that of B, which is in turn larger than that of C, then A is the person with the largest area and may be directly used as the subject object.
In a third implementation manner, when the number of the preset type objects included in the determined objects is at least two, the preset type object with the highest definition is determined as the subject object.
Specifically, after determining the objects included in the target image, the definition of each determined object in the target image may be analyzed, and the object with the highest definition of a preset type may be determined as the subject object.
Of course, when the number of the preset type objects is at least two, the method is not limited to the second implementation manner and the third implementation manner, and the subject object may be determined from the preset type objects according to other rules.
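The three implementation manners above can be sketched together as follows — a hedged illustration only, assuming each object carries precomputed "area" and "sharpness" fields (both field names are invented here, not taken from this document):

```python
def subject_from_preset_type(objects, preset_type="person", key="area"):
    """Pick the subject among objects of the preset type:
    none -> defer to another embodiment; one -> use it directly;
    several -> take the max by `key` ("area" or "sharpness")."""
    candidates = [o for o in objects if o["type"] == preset_type]
    if not candidates:
        return None                       # fall back to another rule
    if len(candidates) == 1:
        return candidates[0]              # first implementation manner
    return max(candidates, key=lambda o: o[key])  # second/third manner
```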
In the third embodiment, after at least one object included in the target image is determined in step S102, the area occupied by each object may be obtained, and the object with the largest area among the determined objects may be determined as the subject object.
The area of a region may be expressed as the actual area the region occupies in the target image; for example, if one of the regions included in the target image is a square with a side length of 1 cm, the area of the region is 1 square centimeter.
The area of the region may also be represented as the number of pixels constituting the region, for example, if one of the regions included in the target image is composed of 1000 pixels, the area of the region may be represented as 1000 pixels.
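Under the second representation, the per-object areas fall straight out of a segmentation mask. A small sketch, assuming the mask is given as rows of integer object labels (the document does not fix a mask format):

```python
from collections import Counter

def region_areas(mask):
    """Pixel-count area per object label in a 2-D segmentation mask
    (a list of rows of integer labels, one label per pixel)."""
    counts = Counter()
    for row in mask:
        counts.update(row)
    return dict(counts)

def largest_object(mask):
    """Label of the object occupying the most pixels (third embodiment)."""
    areas = region_areas(mask)
    return max(areas, key=areas.get)
```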
For example, fig. 4a shows the original target image, and fig. 4b shows the image obtained after semantic segmentation. As shown in fig. 4b, the region of each object is segmented and indicated by a different color, and the area of each region can be obtained. In this example the region occupied by the table is the largest, so the table may be taken as the subject object.
In a fourth embodiment, the subject object may be specified by a user, and specifically, an instruction of the user may be received, the instruction being to select an object from the at least one determined object, and the object specified by the instruction is determined as the subject object.
The object specified by the user should be one of the determined objects; for example, if the objects determined in the target image include the sky, the grassland, a person, and woods, the user's specified object is selected from those four.
In one specific implementation, a button may be provided for the recomposition operation on the target image. When the user presses the button, a user-specification mode is entered; the user selects an object from the determined objects, an instruction including the specified object is generated, and the device performing the recomposition parses the instruction to obtain the specified object and determines it as the subject object.
In a fifth embodiment, an object located in a preset region of the target image among the determined at least one object is determined as a subject object.
The preset area may be user-defined. In one implementation manner, a position may be preset, and the area centered on that position is used as the preset area. For example, if the center of the image is taken as the preset position and a circle with that center and a radius of 1 cm as the preset area, then an object within the circular area may be taken as the subject object.
In a sixth embodiment, the ratio of the area of each of the determined objects to the area of the target image is obtained, and the subject object is determined from the objects whose ratio is larger than a preset ratio threshold.
As described above with reference to the third embodiment, the area of the region can be expressed in two ways: the actual area of the region and the number of pixels in the region.
In the first representation, the actual area of the region occupied by each object and the actual area of the target image are obtained; the ratio for each object is then the ratio of the actual area of its region to the actual area of the target image.
In the second representation, the number of pixels in the region occupied by each object and the total number of pixels of the target image are obtained; the ratio for each object is then the ratio of the number of pixels in its region to the total number of pixels of the target image.
The obtained proportion may be represented by any one of the above two representation manners, so that after the proportion of each of the determined objects is obtained, the obtained proportion may be filtered by using a preset proportion threshold, where the preset proportion threshold may be set by a user.
In one implementation manner, the ratios larger than the preset ratio threshold are screened out from the obtained ratios, the corresponding objects are taken as candidate objects, and the subject object is then determined from the candidates.
For example, the objects included in the target image are: the sky, the grassland, the trees, person A, person B, and person C. In the target image, the sky occupies 20%, the grassland 20%, the trees 3%, person A 30%, person B 7%, and person C 20%; with a preset ratio threshold of 10%, only objects occupying more than 10% are kept as candidates. The candidates are therefore the sky, the grassland, person A, and person C, and the subject object is selected from among them.
With the sixth embodiment, objects whose area ratio falls below the threshold are excluded, leaving the objects with larger areas as candidates for the subject object.
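The screening step of the sixth embodiment reduces to a one-line filter. A sketch using the pixel-count representation (the mapping from object names to pixel counts is illustrative):

```python
def candidate_objects(areas, total_pixels, threshold=0.10):
    """Keep objects whose share of the target image exceeds the preset
    ratio threshold; `areas` maps object name -> pixel count."""
    return {name for name, a in areas.items() if a / total_pixels > threshold}
```

Applied to the example above, only the sky, the grassland, person A, and person C survive the 10% threshold.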
The method for determining the subject object may be any one of the above embodiments, and of course, other methods for determining the subject object may also be included, which are not limited herein.
And S104, taking the position of the subject object in the target image as the subject position in a preset composition principle, and performing composition processing on the target image according to the preset composition principle.
The preset composition principle may be set by a user and may include at least one of a center composition principle, a rule-of-thirds composition principle, and a golden-section composition principle.
For each composition principle, there is at least one subject position at which the determined subject object is placed so that it is highlighted in the composition image.
For example, fig. 5 is a schematic diagram of the center composition principle; its single subject position, position A1, is the center of the image. Fig. 6 is a schematic diagram of the rule-of-thirds composition principle, which may have four subject positions: positions B1, B2, B3, and B4. Fig. 7 is a schematic diagram of the golden-section composition principle, which has one subject position: position C1.
In addition, for the golden-section composition principle shown in fig. 7, a and b satisfy the following relation:

a / b ≈ 1.618

where the symbol "≈" denotes approximate equality; that is, the ratio of a to b is a value close to 1.618.
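The subject positions of the three principles can be sketched numerically as follows — an illustration under the assumption that the rule-of-thirds positions sit at the four grid intersections and that the golden-section position uses the ~0.618 split (concrete coordinates the document only shows in its figures):

```python
def subject_positions(width, height, principle):
    """Candidate subject positions (x, y) for a composition principle:
    one center point, four rule-of-thirds intersections, or one
    golden-section point."""
    if principle == "center":
        return [(width / 2, height / 2)]
    if principle == "thirds":
        return [(i * width / 3, j * height / 3) for i in (1, 2) for j in (1, 2)]
    if principle == "golden":
        g = 1 / 1.618  # ≈ 0.618, since a / b ≈ 1.618
        return [(width * g, height * g)]
    raise ValueError(f"unknown composition principle: {principle}")
```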
The preset composition principle may consist of a single composition principle or of at least two. The two cases are described separately below.
In the first case, only one composition principle is used. It may be any one of the center, rule-of-thirds, and golden-section composition principles; whichever is selected, composition processing is carried out according to it.
For example, if the preset composition principle is the center composition principle, whose subject position is the center of the image, the position of the subject object in the target image is taken as the center position and the target image is composed according to the center composition principle, so that the subject object lies at the center of the resulting composition image.
Of course, in addition to the above three composition principles, other applicable composition principles can be adopted, and are not limited herein.
In the second case, at least two composition principles are preset, which may be at least two of the center, rule-of-thirds, and golden-section composition principles.
In this case, each of the at least two preset composition principles may perform composition processing on the target image, so as to obtain a corresponding composition image. Of course, different composition principles are applied to the same target image, and the resulting composition image may be different. Therefore, in the following embodiments, it is described how to select one of at least two preset composition principles to perform composition processing on a target image.
In the seventh embodiment, the step of setting the position of the subject in the target image as the subject position in the preset composition principle and performing composition processing on the target image according to the preset composition principle (S104) may include the steps of:
1. determining, according to a preset correspondence between the type of the subject object and the composition principle, the composition principle corresponding to the subject object as the target composition principle;
2. taking the position of the subject object in the target image as the subject position in the target composition principle, and performing composition processing on the target image according to the target composition principle.
The following describes two steps in the seventh embodiment.
Step 1, the type of the object can be regarded as the classification of the object, for example, when the object is a person, the type of the object is a person; when the object is a tree, the type of the object is a tree.
The correspondence between the type of the subject object and the composition principle may be set by a user. Specifically, the correspondence may be one-to-one; for example, the type person may correspond only to the golden-section composition principle, i.e., when the subject object is a person, composition is performed only by the golden-section principle, and that principle is used only when the subject object is a person.
The correspondence may also be many-to-one, i.e., several object types may correspond to the same composition principle. For example, the types table, bookshelf, and television may all correspond to the center composition principle.
After the main object is determined, the composition principle corresponding to the main object can be further determined according to the preset corresponding relation between the type of the object and the composition principle.
Step 2: the target image is composed according to the determined target composition principle; once the position of the subject object is set as the subject position in the target composition principle, composition processing can be performed on the target image.
For example, if the type of the subject object is person, the corresponding composition principle is the golden-section principle, and the subject position in that principle is the golden-section position, i.e., position C1 in fig. 7, then the position of the subject object in the target image is taken as the golden-section position, and the target image is composed according to the golden-section composition principle.
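Step 1 of this seventh embodiment is a simple table lookup. A sketch in which the correspondence table itself is hypothetical (the type names and the many-to-one mapping are set here as a user might configure them, not prescribed by the document):

```python
# Hypothetical many-to-one correspondence: subject type -> composition principle.
RULE_BY_SUBJECT_TYPE = {
    "person": "golden",
    "table": "center",
    "bookshelf": "center",
    "television": "center",
}

def target_principle(subject_type, default="thirds"):
    """Look up the target composition principle for the subject's type,
    falling back to a default when the type is not in the table."""
    return RULE_BY_SUBJECT_TYPE.get(subject_type, default)
```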
In an eighth embodiment, the step of setting the position of the subject in the target image as the subject position in the preset composition principle, and performing composition processing on the target image according to the preset composition principle (S104), may include the steps of:
a. and determining a composition principle with the largest image area obtained after composition processing is carried out on the target image from at least two preset composition principles as a target composition principle.
b. And taking the position of the subject object in the target image as the subject position in the target composition principle, and performing composition processing on the target image according to the target composition principle.
In this embodiment, at least two composition principles are preset, for example at least two of the center, rule-of-thirds, and golden-section composition principles.
In the case of determining the subject object, the subject position may be determined for the subject object by using each preset composition principle, and then the composition processing may be performed separately.
For example, suppose the preset composition principles are two: the center composition principle and the golden-section composition principle, each of which is used to compose the target image. First, the position of the subject object is determined as the subject position in each principle: for the center composition principle, the subject position is the center position, so the position of the subject object in the target image is determined as the center position; for the golden-section composition principle, the subject position is the golden-section position, so the position of the subject object is determined as the golden-section position.
For different composition principles, different composition images can be obtained by performing composition processing on the same image. Therefore, each preset composition principle is used for performing composition processing on the target image respectively, and different composition images can be obtained. And the areas of the different patterned images may also be different.
For example, the preset composition principle includes two kinds: the central composition principle and the golden section composition principle are respectively used for performing composition processing on the target image according to the central composition principle and the golden section composition principle, so that different composition images can be respectively obtained: the image A is the composition image obtained by the central composition principle, and the image B is the composition image obtained by the golden section composition principle; also, the areas of image a and image B may be different.
The area of the image after the composition processing is performed on the target image by each composition principle, that is, the area of the composition image, can be obtained in two ways: the first mode is that a corresponding composition image is obtained after composition processing is carried out on a target image through a composition principle, so that the area of the composition image can be obtained; in the second mode, the area of the composition image after the composition processing can be calculated according to the data generated during the composition processing according to each composition principle.
After the areas of the composition images corresponding to the preset composition principles are obtained, the composition image with the largest area can be selected and determined as the final composition image.
For example, suppose the preset composition principles are the center composition principle and the golden-section composition principle, each of which is used to compose the target image. If the image obtained with the center principle has an area of 5 square centimeters and the image obtained with the golden-section principle has an area of 6 square centimeters, the composition image obtained by the golden-section principle is taken as the composition image corresponding to the target image.
By the embodiment, a plurality of composition principles are provided, so that an optimal composition principle can be selected from the plurality of composition principles to perform composition processing on the target image, and a composition image with the largest area can be obtained.
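The selection step of the eighth embodiment can be sketched as a maximization over candidate principles — assuming a `compose` callback that returns the (width, height) of the composition image for a given principle (an interface invented here for illustration):

```python
def best_principle(principles, compose):
    """Return the composition principle whose composed image has the
    largest area; `compose(principle)` yields (width, height)."""
    return max(principles, key=lambda p: compose(p)[0] * compose(p)[1])
```

For the 5 cm² vs 6 cm² example above, this picks the golden-section principle.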
In a ninth implementation manner, the step of performing composition processing on the target image according to a preset composition principle may include the following steps:
determining the length of each side length of the first image when the area of the first image is maximum based on the position of the main body object in a preset composition principle, wherein the first image is as follows: performing composition processing on a target image according to a preset composition principle to obtain an image within a target image range;
and cutting the target image according to the determined length of each side to obtain a composition processing image.
And under the premise that the main body object is placed at the main body position, the image with the largest area obtained in the target image range is the first image.
For example, as shown in fig. 8, 1 is the target image, the circle is the subject object, and the subject position in the preset composition principle is the exact center. When the circle is placed at the center within the range of image 1, images of different sizes, such as 2, 3, and 4, can be obtained. Since image 2 has the largest area, the target image 1 is cropped to the size of image 2, and a composition-processed image of that size is obtained.
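For the center composition principle, the ninth embodiment reduces to finding the largest crop, inside the target image, whose center coincides with the subject's position. A sketch (the coordinate convention and return format are illustrative):

```python
def max_crop_centered_on(img_w, img_h, subj_x, subj_y):
    """Largest axis-aligned crop inside an img_w x img_h image whose
    exact center is (subj_x, subj_y). Returns (left, top, width, height)."""
    half_w = min(subj_x, img_w - subj_x)  # limited by the nearer vertical edge
    half_h = min(subj_y, img_h - subj_y)  # limited by the nearer horizontal edge
    return subj_x - half_w, subj_y - half_h, 2 * half_w, 2 * half_h
```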
In a tenth embodiment, the step of performing composition processing on the target image according to a preset composition principle may include the steps of:
determining a second image based on the position of the subject in a preset composition principle, wherein the size of the second image is the same as that of the target image;
comparing the second image with the target image to determine an overflow area, wherein the overflow area is as follows: the second image exceeds the area of the target image;
filling image information corresponding to the overflow area acquired in advance into the overflow area;
and splicing the filled overflow area and the overlapped area to form a third image, wherein the size of the third image is the same as that of the target image, and the overlapped area is an area where the second image is overlapped with the target image.
This embodiment will be described below with reference to fig. 9.
As shown in fig. 9, a circle is a subject object, 1 is a target image, and the circle is located at the center position in the target image. The subject position in the preset composition rule is one third of the left side of the image. Thus, the determined second image is 2 in fig. 9 based on the subject object being located in the left third of the image, i.e. the circle is located in the left third of the second image 2.
The second image 2 is the same size as the target image 1, and the second image 2 is compared with the target image 1, and it can be considered that the second image 2 is obtained by shifting the target image 1 to the right. The area where the second image 2 and the target image 1 are partially overlapped is c, that is, c is an overlapped area. The region where the second image 2 does not overlap with the target image 1 includes a region a and a region b, where the region b is an overflow region and the size of the region b is the same as that of the region a.
The target image 1 includes image information, i.e., information that can be displayed as an image, such as a person, a tree, or a car. The overflow region b, however, lies outside the range of the target image 1. To keep the image information presented by the recomposed image consistent, the image information filled into the overflow region b may be image information captured around the target image 1 when the target image 1 was captured; in this way, the image information in the overflow region b is continuous with that of the adjacent overlapping region c.
For the acquisition of the image information around the target image, in an embodiment, after the step of acquiring the target image, the method may further include: acquiring image information in a preset range around a target image; the acquired image information is stored.
This will be explained with reference to fig. 9. When the target image 1 is acquired, image information within a preset range around the target image 1 is acquired, and the acquired image information is stored. Specifically, two image capturing devices (such as cameras) may be provided, where one of the image capturing devices is used to capture image information included in the target image 1, and the obtained image is the target image 1; another image capturing device is used to capture image information around the target image 1. The preset range can be set by self-definition.
And after determining the overflow area b and filling the image information corresponding to the overflow area b into the overflow area b, splicing the filled overflow area b and the overlapped area c to form a third image.
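Reducing fig. 9 to one dimension gives a compact sketch of this tenth embodiment: shifting the frame by `shift` pixels leaves an overflow strip past the target's edge, which is filled from the pre-stored surrounding pixels so the output keeps the target's length. (The list-of-pixels representation is illustrative.)

```python
def recompose_row(row, shift, surround):
    """Shift a 1-D image `row` so its frame moves past one edge by
    `shift` pixels; fill the resulting overflow region from pre-captured
    `surround` pixels and return overlap + overflow (same length as row)."""
    overlap = row[shift:]          # region c: still inside the target image
    overflow = surround[:shift]    # region b: filled from stored surroundings
    return overlap + overflow
```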
According to the embodiment, the size of the second image obtained after the composition processing is carried out on the target image is kept the same as that of the target image, and the image information included in the second image is consistent, so that the user experience is improved.
According to the technical solution provided by this embodiment of the present invention, the target image is obtained and the objects included in it are determined; a subject object is determined from the determined objects; and the position of the subject object in the target image is taken as the subject position in a preset composition principle, according to which composition processing is performed on the target image. In this way the subject object can be determined from the target image, and the target image recomposed according to the determined subject object and the preset composition principle. Thus, the image can still be recomposed without manual operation by the user, improving the convenience of composition processing.
Corresponding to the above method embodiment, an embodiment of the present invention further provides a patterning device, as shown in fig. 10, the device including:
a first obtaining module 1010, configured to obtain a target image;
a first determining module 1020 for determining at least one object included in the target image;
a second determining module 1030 configured to determine a subject object from the determined at least one object;
the composition processing module 1040 is configured to use the position of the subject object in the target image as a subject position in a preset composition principle, and perform composition processing on the target image according to the preset composition principle.
With the technical solution provided by this embodiment of the present invention, the subject object can be determined from the target image, and the target image recomposed according to the determined subject object and the preset composition principle. Thus, the image can still be recomposed without manual operation by the user, improving the convenience of composition processing.
Optionally, in an embodiment, the second determining module 1030 may include:
a first determining submodule, configured to judge whether the determined at least one object includes an object of a preset type, and if so, determine the object of the preset type among the determined objects as the subject object; or
a second determining submodule, configured to determine, as the subject object, the object with the largest region area among the determined at least one object; or
a third determining submodule, configured to determine, according to an instruction for selecting an object from the determined at least one object, the object specified by the instruction as the subject object; or
a fourth determining submodule, configured to determine, as the subject object, an object located in a preset region of the target image among the determined at least one object; or
a fifth determining submodule, configured to obtain the ratio of the area of each of the determined objects to the area of the target image, and determine the subject object from the objects whose ratio is larger than a preset ratio threshold.
Optionally, in an embodiment, the first determining submodule may be specifically configured to:
when the determined objects include one object of the preset type, directly determining that object as the subject object; or
determining the object of the preset type with the largest area as the subject object when the determined objects include at least two objects of the preset type; or
determining the object of the preset type with the highest definition as the subject object when the determined objects include at least two objects of the preset type.
Optionally, in one embodiment, the composition principle includes at least one of a center composition principle, a rule-of-thirds composition principle, and a golden-section composition principle.
Optionally, in an embodiment, the patterning process module 1040 may include:
a sixth determining sub-module, configured to determine, according to a correspondence between a preset type of the subject and a composition principle, a composition principle corresponding to the subject as a target composition principle;
and the first composition processing submodule is used for taking the position of the subject object in the target image as the subject position in the target composition principle and performing composition processing on the target image according to the target composition principle.
Optionally, in an embodiment, the composition processing module 1040 may be specifically configured to:
determining a composition principle with the largest image area obtained after composition processing is carried out on the target image from at least two preset composition principles as a target composition principle;
and taking the position of the subject object in the target image as the subject position in the target composition principle, and performing composition processing on the target image according to the target composition principle.
Optionally, in an embodiment, the first determining module 1020 is specifically configured to:
identifying at least one object included in the target image by a neural network image semantic segmentation model.
Optionally, in an embodiment, the apparatus may further include:
the second acquisition module is used for acquiring a sample image, wherein the sample image comprises at least one marked object;
and the training module is used for training the preset neural network image semantic segmentation model by using the sample image to obtain the neural network image semantic segmentation model meeting the preset conditions.
Optionally, in an embodiment, the composition processing module 1040 is specifically configured to:
determining the length of each side length of the first image when the area of the first image is maximum based on the position of the main body object in a preset composition principle, wherein the first image is as follows: performing composition processing on a target image according to a preset composition principle to obtain an image within a target image range;
and cutting the target image according to the determined length of each side to obtain a composition processing image.
Optionally, in an embodiment, the composition processing module 1040 is specifically configured to:
determining a second image based on the position of the subject in a preset composition principle, wherein the size of the second image is the same as that of the target image;
comparing the second image with the target image to determine an overflow area, wherein the overflow area is as follows: the second image exceeds the area of the target image;
filling image information corresponding to the overflow area acquired in advance into the overflow area;
and carrying out image splicing on the filled overflow area and the overlapped area to form a third image, wherein the size of the third image is the same as that of the target image, and the overlapped area is an area where the second image is overlapped with the target image.
Optionally, in an embodiment, the apparatus may further include:
the third acquisition module is used for acquiring image information in a preset range around the target image;
and the storage module is used for storing the acquired image information.
With the technical solution provided by the embodiments of the present invention, a subject object can be determined from the target image, and the target image can be recomposed according to the determined subject object and a preset composition principle. For the user, an image can thus be recomposed without any manual operation, which improves the convenience of composition processing.
An embodiment of the present invention further provides an electronic device, as shown in fig. 11, comprising a processor 1110, a communication interface 1120, a memory 1130 and a communication bus 1140, wherein the processor 1110, the communication interface 1120 and the memory 1130 communicate with one another via the communication bus 1140,
a memory 1130 for storing computer programs;
the processor 1110, when executing the program stored in the memory 1130, implements the following steps:
acquiring a target image;
determining at least one object included in the target image;
determining a subject object from the determined at least one object;
and taking the position of the subject object in the target image as the subject position in a preset composition principle, and performing composition processing on the target image according to the preset composition principle.
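The four processor steps above can be sketched end to end. The callables below stand in for the semantic segmentation model, the subject-selection heuristic, and the composition rule; every name here is an illustrative assumption, not the patented implementation.

```python
def compose(image, detect_objects, pick_subject, apply_rule):
    """Run the claimed pipeline: detect objects in the target image,
    choose a subject among them, then recompose around its position."""
    objects = detect_objects(image)   # determine at least one object
    subject = pick_subject(objects)   # determine the subject object
    if subject is None:
        return image                  # nothing to recompose around
    return apply_rule(image, subject) # composition processing

def largest_object(objects):
    """One of the heuristics described in the embodiments: take the
    object with the largest region area as the subject object."""
    return max(objects, key=lambda o: o["area"], default=None)
```

Any of the other described heuristics (preset type, user instruction, preset region, area-ratio threshold) could be dropped in as `pick_subject` without changing the surrounding pipeline.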
With the technical solution provided by the embodiments of the present invention, a subject object can be determined from the target image, and the target image can be recomposed according to the determined subject object and a preset composition principle. For the user, an image can thus be recomposed without any manual operation, which improves the convenience of composition processing.
Certainly, the electronic device provided in the embodiments of the present invention can also perform the composition method described in any of the above embodiments; see the embodiment corresponding to fig. 1, and details are not repeated here.
In another embodiment of the present invention, a computer-readable storage medium is further provided, which stores instructions that, when run on a computer, cause the computer to perform the composition method described in the embodiment corresponding to fig. 1.
The embodiment of the invention also provides a computer application program, and when the computer application program runs on a computer, the computer is enabled to execute a composition method in any one of the above embodiments.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first", "second", "third", etc. may be used to describe various connection ports and identification information, etc. in the embodiments of the present application, these connection ports and identification information, etc. should not be limited to these terms. These terms are only used to distinguish the connection port and the identification information and the like from each other. For example, the first connection port may also be referred to as a second connection port, and similarly, the second connection port may also be referred to as a first connection port, without departing from the scope of embodiments of the present application.
The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)", depending on the context.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. A composition method, the method comprising:
acquiring a target image;
determining at least one object included in the target image;
determining a subject object from the determined at least one object;
taking the position of the subject object in the target image as a subject position in a preset composition principle, and performing composition processing on the target image according to the preset composition principle;
wherein the performing composition processing on the target image according to the preset composition principle comprises:
determining a second image based on the position of the subject in the preset composition principle, wherein the size of the second image is the same as that of the target image;
comparing the second image with the target image to determine an overflow area, wherein the overflow area is an area of the second image that lies outside the target image;
filling the overflow area with pre-acquired image information corresponding to the overflow area;
and stitching the filled overflow area with an overlap area to form a third image, wherein the size of the third image is the same as that of the target image, and the overlap area is an area where the second image overlaps the target image.
2. The method of claim 1, wherein determining a subject object from the determined at least one object comprises:
judging whether the determined at least one object comprises an object of a preset type, and if so, determining the object of the preset type among the determined objects as the subject object; or,
determining an object with the largest region area among the determined at least one object as the subject object; or,
determining, according to an instruction for selecting an object from the determined at least one object, the object specified by the instruction as the subject object; or,
determining an object located in a preset region of the target image among the determined at least one object as the subject object; or,
acquiring the proportion of the area of each of the determined at least one object in the target image, and determining the subject object from the objects whose proportion is greater than a preset proportion threshold.
3. The method according to claim 2, wherein the determining the object of the preset type among the determined objects as the subject object comprises:
when the determined objects include one object of the preset type, directly determining that object as the subject object; or,
when the determined objects include at least two objects of the preset type, determining the object of the preset type with the largest area as the subject object; or,
when the determined objects include at least two objects of the preset type, determining the object of the preset type with the highest definition as the subject object.
4. The method according to claim 1, wherein the preset composition principle comprises at least one of a center composition principle, a rule-of-thirds composition principle and a golden-section composition principle.
5. The method of claim 1, wherein the determining at least one object included in the target image comprises:
identifying at least one object included in the target image by a neural network image semantic segmentation model.
6. The method of claim 5, further comprising, prior to said acquiring a target image:
acquiring a sample image, wherein the sample image comprises at least one labeled object;
and training a preset neural network image semantic segmentation model by using the sample image to obtain the neural network image semantic segmentation model meeting preset conditions.
7. The method of claim 1, further comprising, after the step of acquiring a target image:
acquiring image information in a preset range around the target image;
the acquired image information is stored.
8. A composition apparatus, the apparatus comprising:
the first acquisition module is used for acquiring a target image;
a first determination module for determining at least one object included in the target image;
a second determination module for determining a subject object from the determined at least one object;
the composition processing module is used for taking the position of the subject object in the target image as a subject position in a preset composition principle and performing composition processing on the target image according to the preset composition principle;
wherein the composition processing module is specifically configured to:
determining a second image based on the position of the subject in a preset composition principle, wherein the size of the second image is the same as that of the target image;
comparing the second image with the target image to determine an overflow area, wherein the overflow area is an area of the second image that lies outside the target image;
filling the overflow area with pre-acquired image information corresponding to the overflow area;
and stitching the filled overflow area with an overlap area to form a third image, wherein the size of the third image is the same as that of the target image, and the overlap area is an area where the second image overlaps the target image.
9. The apparatus of claim 8, wherein the second determining module comprises:
a first determining submodule, configured to judge whether the determined at least one object comprises an object of a preset type, and if so, determine the object of the preset type among the determined objects as the subject object; or,
a second determining submodule, configured to determine an object with the largest region area among the determined at least one object as the subject object; or,
a third determining submodule, configured to determine, according to an instruction for selecting an object from the determined at least one object, the object specified by the instruction as the subject object; or,
a fourth determining submodule, configured to determine an object located in a preset region of the target image among the determined at least one object as the subject object; or,
a fifth determining submodule, configured to acquire the proportion of the area of each of the determined at least one object in the target image, and determine the subject object from the objects whose proportion is greater than a preset proportion threshold.
10. The apparatus of claim 9, wherein the first determination submodule is specifically configured to:
when the determined objects include one object of the preset type, directly determine that object as the subject object; or,
when the determined objects include at least two objects of the preset type, determine the object of the preset type with the largest area as the subject object; or,
when the determined objects include at least two objects of the preset type, determine the object of the preset type with the highest definition as the subject object.
11. The apparatus according to claim 8, wherein the preset composition principle comprises at least one of a center composition principle, a rule-of-thirds composition principle and a golden-section composition principle.
12. The apparatus of claim 8, wherein the first determining module is specifically configured to:
identifying at least one object included in the target image by a neural network image semantic segmentation model.
13. The apparatus of claim 8, further comprising:
a second acquisition module for acquiring a sample image, wherein the sample image comprises at least one labeled object;
and the training module is used for training a preset neural network image semantic segmentation model by using the sample image to obtain the neural network image semantic segmentation model meeting the preset conditions.
14. The apparatus of claim 8, further comprising:
the third acquisition module is used for acquiring image information in a preset range around the target image;
and the storage module is used for storing the acquired image information.
15. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another via the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 7 when executing a program stored in the memory.
16. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 7.
CN201810170997.3A 2018-03-01 2018-03-01 Composition method, composition device, electronic equipment and storage medium Active CN108366203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810170997.3A CN108366203B (en) 2018-03-01 2018-03-01 Composition method, composition device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810170997.3A CN108366203B (en) 2018-03-01 2018-03-01 Composition method, composition device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN108366203A CN108366203A (en) 2018-08-03
CN108366203B true CN108366203B (en) 2020-10-13

Family

ID=63003053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810170997.3A Active CN108366203B (en) 2018-03-01 2018-03-01 Composition method, composition device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108366203B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109151320B (en) * 2018-09-29 2022-04-22 联想(北京)有限公司 Target object selection method and device
CN113163133A (en) * 2018-10-15 2021-07-23 华为技术有限公司 Image processing method, device and equipment
CN109587394A (en) * 2018-10-23 2019-04-05 广东智媒云图科技股份有限公司 A kind of intelligence patterning process, electronic equipment and storage medium
CN110298380A (en) * 2019-05-22 2019-10-01 北京达佳互联信息技术有限公司 Image processing method, device and electronic equipment
CN110519509A (en) * 2019-08-01 2019-11-29 幻想动力(上海)文化传播有限公司 Composition evaluation method, method for imaging, device, electronic equipment, storage medium
CN110807955B (en) * 2019-11-01 2020-10-23 诸暨山争网络科技有限公司 Real-time driving route switching platform and method based on data capture
CN111368698B (en) * 2020-02-28 2024-01-12 Oppo广东移动通信有限公司 Main body identification method, main body identification device, electronic equipment and medium
CN114025097B (en) * 2020-03-09 2023-12-12 Oppo广东移动通信有限公司 Composition guidance method, device, electronic equipment and storage medium
CN111432122B (en) * 2020-03-30 2021-11-30 维沃移动通信有限公司 Image processing method and electronic equipment
CN116998159A (en) * 2021-03-04 2023-11-03 Oppo广东移动通信有限公司 Method for suggesting shooting position of electronic equipment and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103559717A (en) * 2013-11-14 2014-02-05 上海华勤通讯技术有限公司 Shooting preview composition assisting method and device for shooting equipment
CN105528786A (en) * 2015-12-04 2016-04-27 小米科技有限责任公司 Image processing method and device
CN105578035A (en) * 2015-12-10 2016-05-11 联想(北京)有限公司 Image processing method and electronic device
CN106131411A (en) * 2016-07-14 2016-11-16 纳恩博(北京)科技有限公司 A kind of method and apparatus shooting image
CN107493426A (en) * 2017-07-05 2017-12-19 努比亚技术有限公司 A kind of information collecting method, equipment and computer-readable recording medium
CN107743193A (en) * 2017-09-26 2018-02-27 深圳市金立通信设备有限公司 Picture editor's way choice method, terminal and computer-readable recording medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5115139B2 (en) * 2007-10-17 2013-01-09 ソニー株式会社 Composition determination apparatus, composition determination method, and program

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103559717A (en) * 2013-11-14 2014-02-05 上海华勤通讯技术有限公司 Shooting preview composition assisting method and device for shooting equipment
CN105528786A (en) * 2015-12-04 2016-04-27 小米科技有限责任公司 Image processing method and device
CN105578035A (en) * 2015-12-10 2016-05-11 联想(北京)有限公司 Image processing method and electronic device
CN106131411A (en) * 2016-07-14 2016-11-16 纳恩博(北京)科技有限公司 A kind of method and apparatus shooting image
CN107493426A (en) * 2017-07-05 2017-12-19 努比亚技术有限公司 A kind of information collecting method, equipment and computer-readable recording medium
CN107743193A (en) * 2017-09-26 2018-02-27 深圳市金立通信设备有限公司 Picture editor's way choice method, terminal and computer-readable recording medium

Also Published As

Publication number Publication date
CN108366203A (en) 2018-08-03

Similar Documents

Publication Publication Date Title
CN108366203B (en) Composition method, composition device, electronic equipment and storage medium
CN108765278B (en) Image processing method, mobile terminal and computer readable storage medium
CN111667520B (en) Registration method and device for infrared image and visible light image and readable storage medium
CN103617432B (en) A kind of scene recognition method and device
CN109325954B (en) Image segmentation method and device and electronic equipment
CN108206917B (en) Image processing method and device, storage medium and electronic device
CN110493527B (en) Body focusing method and device, electronic equipment and storage medium
JP4772839B2 (en) Image identification method and imaging apparatus
CN104486552B (en) A kind of method and electronic equipment obtaining image
CN110399842B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN107622497B (en) Image cropping method and device, computer readable storage medium and computer equipment
CN111028170B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
EP2797051A2 (en) Image saliency map determination device, method, program, and recording medium
CN111131688B (en) Image processing method and device and mobile terminal
CN108737875B (en) Image processing method and device
CN109089041A (en) Recognition methods, device, electronic equipment and the storage medium of photographed scene
CN112036209A (en) Portrait photo processing method and terminal
CN110691226A (en) Image processing method, device, terminal and computer readable storage medium
CN110855876B (en) Image processing method, terminal and computer storage medium
CN111582045B (en) Living body detection method and device and electronic equipment
CN104951440B (en) Image processing method and electronic equipment
CN107977437B (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN111563439B (en) Aquatic organism disease detection method, device and equipment
CN111046747B (en) Crowd counting model training method, crowd counting method, device and server
CN111669492A (en) Method for processing shot digital image by terminal and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201201

Address after: Room 115, area C, 1 / F, building 8, yard 1, yaojiayuan South Road, Chaoyang District, Beijing 100123

Patentee after: Beijing LEMI Technology Co.,Ltd.

Address before: 100123 Building 8, Huitong Times Square, 1 South Road, Chaoyang District, Beijing.

Patentee before: BEIJING KINGSOFT INTERNET SECURITY SOFTWARE Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20230831

Address after: 3870A, 3rd Floor, Building 4, Courtyard 49, Badachu Road, Shijingshan District, Beijing, 100144

Patentee after: Beijing Jupiter Technology Co.,Ltd.

Address before: 100123 room 115, area C, 1st floor, building 8, yard 1, yaojiayuan South Road, Chaoyang District, Beijing

Patentee before: Beijing LEMI Technology Co.,Ltd.