CN111063001A - Picture synthesis method and device, electronic equipment and storage medium

Info

Publication number: CN111063001A
Application number: CN201911309794.9A
Authority: CN (China)
Prior art keywords: picture, size, target, blank, synthesis
Legal status: Granted (Active)
Other languages: Chinese (zh)
Other versions: CN111063001B (en)
Inventor: 张国栋
Current Assignee: Beijing Kingsoft Internet Security Software Co Ltd
Original Assignee: Beijing Kingsoft Internet Security Software Co Ltd
Application filed by Beijing Kingsoft Internet Security Software Co Ltd
Priority to CN201911309794.9A
Publication of CN111063001A; application granted; publication of CN111063001B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/001 - Texturing; Colouring; Generation of texture or colour

Abstract

The application discloses a picture synthesis method, a picture synthesis device, an electronic device and a computer-readable storage medium, wherein the method comprises the following steps: acquiring a first picture and a second picture to be synthesized; calculating a target size of the second picture according to a first size of the first picture; generating a blank picture according to the first size of the first picture and the target size of the second picture; determining the area range of the second picture in the blank picture according to the target size of the second picture; and extracting color information at corresponding positions from the second picture according to the area range, and generating a target synthetic picture according to the extracted color information, the first picture and the blank picture. The method requires no repeated drawing, reduces the occupation of resource memory, reduces coupling, reduces the workload of picture replacement, and avoids misoperation caused by picture replacement.

Description

Picture synthesis method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a picture synthesis method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Internet-based promotion activities often take the form of spreading pictures on web pages or social platforms, and such pictures are usually generated by combining a picture to be shared with other pictures. For example, a picture to be shared in a game is shared externally after a two-dimensional code picture has been added to it.
Generally, because the scenes in which pictures to be shared appear are diverse, the size of the pictures to be shared is not fixed and cannot be predicted in advance, while the content of the picture to be added (for example, a two-dimensional code) stays the same. In the related art, the same picture content is therefore made into pictures of different sizes in advance to meet the synthesis requirement. However, this approach results in large memory usage and severe coupling, and increases the possibility of operation errors.
Disclosure of Invention
The object of the present application is to solve, at least to some extent, one of the above-mentioned technical problems.
Therefore, a first objective of the present application is to provide a picture synthesis method, which avoids making a second picture with the same content in different sizes to meet the requirements of various synthesis scenes, avoids repeated drawing, reduces the occupation of resource memory, reduces coupling, reduces the workload of repeatedly replacing pictures when synthesizing pictures in various synthesis scenes, and avoids misoperation caused by picture replacement.
A second object of the present application is to provide a picture synthesizing apparatus.
A third object of the present application is to provide an electronic device.
A fourth object of the present application is to propose a computer readable storage medium.
To achieve the above object, a first aspect of the present application provides a picture synthesis method, including: acquiring a first picture and a second picture to be synthesized; calculating to obtain a target size of the second picture according to the first size of the first picture; generating a blank picture according to the first size of the first picture and the target size of the second picture; determining the area range of the second picture in the blank picture according to the target size of the second picture; and extracting color information at a corresponding position from the second picture according to the area range, and generating a target synthetic picture according to the extracted color information, the first picture and the blank picture.
According to the picture synthesis method of the embodiment of the application, a first picture and a second picture to be synthesized are obtained; a target size of the second picture is calculated according to the first size of the first picture; a blank picture is generated according to the first size of the first picture and the target size of the second picture; the area range of the second picture in the blank picture is determined according to the target size of the second picture; color information at corresponding positions is extracted from the second picture according to the area range; and a target synthetic picture is generated according to the extracted color information, the first picture and the blank picture. In this method, the size of the second picture is scaled according to the size of the first picture to obtain the target size, a blank picture is generated according to the size of the first picture and the target size, and the color information of the second picture and of the first picture is assigned to the blank picture to obtain the synthetic picture. This way of synthesizing pictures avoids making the second picture with the same content in different sizes to meet the requirements of various synthesis scenes, requires no repeated drawing, reduces the occupation of resource memory, reduces coupling, reduces the workload of replacing pictures when synthesizing pictures in various scenes, and avoids operation errors caused by picture replacement.
According to an embodiment of the present application, calculating the target size of the second picture according to the first size of the first picture includes: determining a target synthesis mode for the first picture and the second picture; and calculating the size of the second picture according to the target synthesis mode and the first size of the first picture to obtain the target size of the second picture. Generating a blank picture according to the first size of the first picture and the target size of the second picture includes: generating the blank picture according to the target synthesis mode, the first size of the first picture and the target size of the second picture.
According to an embodiment of the present application, calculating the size of the second picture according to the target synthesis manner and the first size of the first picture to obtain the target size of the second picture includes: when the target synthesis mode is that the second picture is spliced at the bottom edge or the top edge of the first picture, taking the width value in the first size as the width value of the target size; calculating a first scaling according to the width value of the second picture and the width value in the first size; calculating a height value of the target size according to the first scaling and the height value of the second picture; determining the target size according to the width value of the target size and the height value of the target size; when the target synthesis mode is that the second picture is spliced on the left side or the right side of the first picture, determining the height value in the first size as the height value of the target size; calculating a second scaling according to the height value of the second picture and the height value in the first size; calculating the width value of the target size according to the second scaling and the width value of the second picture; and determining the target size according to the height value of the target size and the width value of the target size.
According to an embodiment of the present application, generating a blank picture according to the target synthesis manner, the first size of the first picture, and the target size of the second picture includes: when the target synthesis mode is that the second picture is spliced at the bottom edge or the top edge of the first picture, determining the width value of a blank picture to be generated according to the width value in the first size; determining the height value of the blank picture to be generated according to the height value in the first size and the height value in the target size; generating a blank picture with a corresponding size according to the width value and the height value of the blank picture to be generated; when the target synthesis mode is that the second picture is spliced on the left side or the right side of the first picture, determining the height value of a blank picture to be generated according to the height value in the first size; determining the width value of the blank picture to be generated according to the width value in the first size and the width value in the target size; and generating a blank picture with a corresponding size according to the height value and the width value of the blank picture to be generated.
According to an embodiment of the present application, extracting color information at a corresponding position from the second picture according to the region range includes: determining the coordinates of each pixel point in the area range; determining the coordinates of the pixel points at the corresponding positions in the second picture according to the coordinates of the pixel points in the area range; and extracting the color information of the corresponding pixel point from the second picture according to the coordinate of the pixel point at the corresponding position.
According to an embodiment of the present application, the generating a target composite picture according to the extracted color information, the first picture and the blank picture includes: assigning the extracted color information to the corresponding position of the blank picture according to the pixel point coordinate corresponding to the extracted color information; determining the position information of the first picture in the blank picture according to the first size of the first picture; and assigning color information at a corresponding position in the first picture to the blank picture according to the position information of the first picture in the blank picture to obtain the target synthetic picture.
To achieve the above object, a second aspect of the present application provides a picture composition apparatus, including: the picture acquisition module is used for acquiring a first picture and a second picture to be synthesized; the size scaling module is used for calculating the target size of the second picture according to the first size of the first picture; the picture generation module is used for generating a blank picture according to the first size of the first picture and the target size of the second picture; the area range determining module is used for determining the area range of the second picture in the blank picture according to the target size of the second picture; the color information extraction module is used for extracting color information on a corresponding position from the second picture according to the area range; and the picture synthesis module is used for generating a target synthesis picture according to the extracted color information, the first picture and the blank picture.
The picture synthesis device of the embodiment of the application obtains a first picture and a second picture to be synthesized; calculates a target size of the second picture according to the first size of the first picture; generates a blank picture according to the first size of the first picture and the target size of the second picture; determines the area range of the second picture in the blank picture according to the target size of the second picture; extracts color information at corresponding positions from the second picture according to the area range; and generates a target synthetic picture according to the extracted color information, the first picture and the blank picture. The device thus scales the size of the second picture according to the size of the first picture to obtain the target size, generates a blank picture according to the size of the first picture and the target size, and assigns the color information of the second picture and of the first picture to the blank picture to obtain the synthetic picture. This way of synthesizing pictures avoids making the second picture with the same content in different sizes to meet the requirements of various synthesis scenes, requires no repeated drawing, reduces the occupation of resource memory, reduces coupling, reduces the workload of replacing pictures when synthesizing pictures in various scenes, and avoids misoperation caused by picture replacement.
According to one embodiment of the application, the size scaling module comprises: a synthesis mode determination unit configured to determine a target synthesis mode for the first picture and the second picture; the size scaling unit is used for calculating the size of the second picture according to the target synthesis mode and the first size of the first picture so as to obtain the target size of the second picture; the picture generation module is specifically configured to: and generating a blank picture according to the target synthesis mode, the first size of the first picture and the target size of the second picture.
According to an embodiment of the application, the size scaling unit is specifically configured to: when the target synthesis mode is that the second picture is spliced to the bottom edge or the top edge of the first picture, determining the width value in the first size as the width value of the target size; calculating a first scaling according to the width value of the second picture and the width value in the first size; calculating a height value of the target size according to the first scaling and the height value of the second picture; determining the target size according to the width value of the target size and the height value of the target size; when the target synthesis mode is that the second picture is spliced on the left side or the right side of the first picture, determining the height value in the first size as the height value of the target size; calculating a second scaling according to the height value of the second picture and the height value in the first size; calculating a width value of the target size according to the second scaling and the width value of the second picture; and determining the target size according to the height value of the target size and the width value of the target size.
According to an embodiment of the present application, the picture generation module is specifically configured to: when the target synthesis mode is that the second picture is spliced at the bottom edge or the top edge of the first picture, determining the width value of a blank picture to be generated according to the width value in the first size; determining the height value of the blank picture to be generated according to the height value in the first size and the height value in the target size; generating a blank picture with a corresponding size according to the width value and the height value of the blank picture to be generated; when the target synthesis mode is that the second picture is spliced on the left side or the right side of the first picture, determining the height value of a blank picture to be generated according to the height value in the first size; determining the width value of the blank picture to be generated according to the width value in the first size and the width value in the target size; and generating a blank picture with a corresponding size according to the height value and the width value of the blank picture to be generated.
According to an embodiment of the present application, the color information extraction module is specifically configured to: determining the coordinates of each pixel point in the area range; determining the coordinates of the pixel points at the corresponding positions in the second picture according to the coordinates of the pixel points in the area range; and extracting the color information of the corresponding pixel point from the second picture according to the coordinate of the pixel point at the corresponding position.
According to an embodiment of the present application, the picture synthesis module is specifically configured to: assigning the extracted color information to the corresponding position of the blank picture according to the pixel point coordinate corresponding to the extracted color information; determining the position information of the first picture in the blank picture according to the first size of the first picture; and assigning color information at a corresponding position in the first picture to the blank picture according to the position information of the first picture in the blank picture to obtain the target synthetic picture.
To achieve the above object, a third aspect of the present application provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the picture synthesis method according to the first aspect.
To achieve the above object, a fourth aspect of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the picture synthesis method according to the first aspect.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of a picture synthesis method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram illustrating a picture synthesis method according to another embodiment of the present application;
FIG. 3 is a schematic flow chart diagram illustrating a picture synthesis method according to another embodiment of the present application;
FIG. 4 is a schematic diagram of a picture composition apparatus according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a picture synthesis apparatus according to another embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
Fig. 1 is a flowchart illustrating a picture synthesis method according to an embodiment of the present application.
As shown in fig. 1, the picture synthesis method includes the following steps:
Step 101, a first picture and a second picture to be synthesized are obtained.
In the embodiment of the application, the first picture and the second picture can be obtained from a picture material library, downloaded from a network, captured from related data, provided by a technician, or the like. Preferably, the first picture is a picture to be shared, such as a picture of a person, a landscape or an animal, and the second picture is a two-dimensional code picture, such as a merchant payment-collection two-dimensional code or a personal-information two-dimensional code.
Step 102, calculating the target size of the second picture according to the first size of the first picture.
Optionally, the size of the second picture is scaled according to the first size of the first picture, either so that the height value of the resulting target size is consistent with the height value of the first picture and the width value of the second picture is scaled by the same ratio, or so that the width value of the resulting target size is consistent with the width value of the first picture and the height value of the second picture is scaled by the same ratio; in either case the target size of the second picture is obtained. It should be noted that in the embodiment of the present application only the size of the second picture is scaled according to the first size of the first picture; the image content of the second picture is not scaled.
It should be noted that, because the first picture and the second picture can be combined in different ways, different scaling methods can be used for the size of the second picture. As an example, the size of the second picture may be calculated according to the synthesis manner of the first picture and the second picture and the first size of the first picture, so as to obtain the target size of the second picture. Optionally, as shown in fig. 2, scaling the size of the second picture according to the synthesis manner of the first picture and the second picture and the first size of the first picture to obtain the target size of the second picture may be implemented as follows:
step 201, determining a target synthesis mode for the first picture and the second picture.
In the embodiment of the present application, the target combination manner of the first picture and the second picture may include, but is not limited to, the second picture is spliced to the bottom edge or the top edge of the first picture, the second picture is spliced to the left side or the right side of the first picture, and the like.
Step 202, according to the target synthesis mode and the first size of the first picture, scaling the size of the second picture to obtain the target size of the second picture.
In the embodiment of the present application, the target synthesis manner is different, and the manner of obtaining the target size of the second picture by scaling the size of the second picture according to the target synthesis manner and the first size of the first picture is also different.
As an example, when the target synthesis manner is that the second picture is spliced to the bottom edge or the top edge of the first picture, the width value in the first size is taken as the width value of the target size; calculating a first scaling according to the width value of the second picture and the width value in the first size; calculating a height value of the target size according to the first scaling and the height value of the second picture; and determining the target size of the second picture according to the width value of the target size and the height value of the target size.
For example, take the first picture as a person picture to be shared and the second picture as a two-dimensional code picture, and let the target synthesis mode be that the two-dimensional code picture is spliced at the bottom edge or the top edge of the person picture to be shared. The width value of the person picture to be shared can be used as the width value of the target size of the two-dimensional code picture, and the first scaling can then be obtained by the following formula:
first scaling = width value of the two-dimensional code picture / width value of the person picture to be shared;
In order to keep the content of the two-dimensional code picture undistorted, its aspect ratio must remain unchanged, so the height of the two-dimensional code picture needs to be scaled by the same ratio. Therefore, the height value in the target size of the two-dimensional code picture can be obtained by the following formula:
height value in the target size of the two-dimensional code picture = height value of the two-dimensional code picture / first scaling;
The target size of the two-dimensional code picture is then the width value of the person picture to be shared by the height value in the target size of the two-dimensional code picture.
As another example, when the target synthesis manner is that the second picture is stitched to the left or right of the first picture, the height value in the first size is taken as the height value of the target size; calculating a second scaling according to the height value of the second picture and the height value in the first size; calculating the width value of the target size according to the second scaling and the width value of the second picture; and determining the target size of the second picture according to the height value of the target size and the width value of the target size.
For example, take the first picture as a person picture to be shared and the second picture as a two-dimensional code picture, and let the target synthesis mode be that the two-dimensional code picture is spliced on the left side or the right side of the person picture to be shared. The height value of the person picture to be shared can be used as the height value in the target size of the two-dimensional code picture, and the second scaling can then be obtained by the following formula:
second scaling = height value of the two-dimensional code picture / height value of the person picture to be shared;
In order to keep the content of the two-dimensional code picture undistorted, its aspect ratio must remain unchanged, so the width of the two-dimensional code picture needs to be scaled by the same ratio. Therefore, the width value in the target size of the two-dimensional code picture can be obtained by the following formula:
width value in the target size of the two-dimensional code picture = width value of the two-dimensional code picture / second scaling;
The target size of the two-dimensional code picture is then the height value of the person picture to be shared by the width value in the target size of the two-dimensional code picture.
Step 103, generating a blank picture according to the first size of the first picture and the target size of the second picture.
Optionally, the size of the blank picture to be generated is determined according to the first size of the first picture and the target size, and a blank picture is generated based on that size, so that the color information of the pixel points of the first picture and the second picture can subsequently be assigned to the blank picture to complete the picture synthesis.
It should be noted that, due to the difference in the combining manners of the first picture and the second picture, the size of the blank picture to be generated is also different. Optionally, a blank picture is generated according to the target synthesis mode for the first picture and the second picture, the first size of the first picture, and the target size of the second picture.
As an example, when the target synthesis manner is that the second picture is spliced at the bottom edge or the top edge of the first picture, the width value of the blank picture to be generated may be determined according to the width value in the first size; determining the height value of a blank picture to be generated according to the height value in the first size and the height value in the target size; and generating a blank picture with a corresponding size according to the width value and the height value of the blank picture to be generated.
That is, when the target synthesis manner is that the second picture is spliced at the bottom edge or the top edge of the first picture, the width value in the first size can be used as the width value of the blank picture to be generated, and the height value of the blank picture can be obtained by the following formula:
height value of the blank picture = height value in the first size + height value in the target size;
Then, the size of the corresponding blank picture is:
size of the blank picture = width value of the first size × (height value of the first size + height value of the target size).
As another example, when the target synthesis manner is that the second picture is spliced on the left side or the right side of the first picture, the height value of the blank picture to be generated may be determined according to the height value in the first size; determining the width value of a blank picture to be generated according to the width value in the first size and the width value in the target size; and generating a blank picture with a corresponding size according to the height value and the width value of the blank picture to be generated.
That is, when the target synthesis manner is that the second picture is spliced on the left side or the right side of the first picture, the height value in the first size can be used as the height value of the blank picture to be generated, and the width value of the blank picture can be obtained by the following formula:
width value of the blank picture = width value in the first size + width value in the target size;
Then, the size of the corresponding blank picture is:
size of the blank picture = height value of the first size × (width value of the first size + width value of the target size).
Step 104, determining the area range of the second picture in the blank picture according to the target size of the second picture.
It will be appreciated that when the second picture is spliced at the bottom edge or the top edge of the first picture, the width value of the blank picture is consistent with the width values of the first size and the target size. Therefore, when the second picture is spliced at the bottom edge of the first picture, an area of the target size of the second picture can be traversed starting from the lower left corner or the lower right corner of the blank picture, thereby determining the area range of the second picture in the blank picture; when the second picture is spliced at the top edge of the first picture, an area of the target size of the second picture can be traversed starting from the upper left corner or the upper right corner of the blank picture, thereby determining the area range of the second picture in the blank picture.
In addition, when the second picture is spliced on the left side or the right side of the first picture, the height value of the blank picture is consistent with the height values of the first size and the target size. Therefore, when the second picture is spliced on the left side of the first picture, the area range of the target size of the second picture can be traversed from the lower left corner or the upper left corner of the blank picture, so that the area range of the second picture in the blank picture is determined; when the second picture is spliced on the right side of the first picture, the area range of the target size of the second picture can be traversed from the lower right corner or the upper right corner of the blank picture, so that the area range of the second picture in the blank picture is determined.
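The area range described in the two preceding paragraphs can be expressed as a rectangle inside the blank picture. The sketch below is a hypothetical helper that does so, taking the lower left corner of the blank picture as the origin of coordinates, as in the examples that follow; the function name and mode strings are assumptions.

```python
def region_range(first_size, target_size, mode):
    """Return (x_min, y_min, x_max, y_max) of the second picture's area range
    in the blank picture, with the origin at the lower left corner."""
    first_w, first_h = first_size
    target_w, target_h = target_size
    if mode == "bottom":                 # second picture below the first picture
        return (0, 0, target_w, target_h)
    if mode == "top":                    # second picture above the first picture
        return (0, first_h, target_w, first_h + target_h)
    if mode == "left":                   # second picture to the left of the first picture
        return (0, 0, target_w, target_h)
    if mode == "right":                  # second picture to the right of the first picture
        return (first_w, 0, first_w + target_w, target_h)
    raise ValueError("unknown target synthesis mode")


print(region_range((1080, 1920), (1080, 1080), "top"))  # (0, 1920, 1080, 3000)
```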
Step 105, extracting color information at a corresponding position from the second picture according to the area range, and generating a target synthetic picture according to the extracted color information, the first picture and the blank picture.
Optionally, determining the coordinates of each pixel point in the area range; determining the coordinates of the pixel points at the corresponding positions in the second picture according to the coordinates of the pixel points in the region range; and extracting the color information of the corresponding pixel point from the second picture according to the coordinate of the pixel point at the corresponding position.
That is, for different target synthesis modes, the coordinates used to extract the color information of the corresponding pixel points from the second picture are obtained differently. For example, when the target synthesis mode is that the second picture is spliced at the bottom edge or on the left side of the first picture, and the lower left corner of the blank picture is taken as the origin of coordinates, the coordinates of the pixel points within the area range of the second picture in the blank picture can be used directly as the coordinates for extracting the color information of the corresponding pixel points from the second picture.
For another example, when the target synthesis mode is that the second picture is spliced to the top side of the first picture, assuming that the lower left corner coordinate of the blank picture is used as the origin of coordinates, the coordinates of each pixel point in the region range corresponding to the color information of the corresponding pixel point extracted from the second picture can be obtained through the following formula:
abscissa of each pixel point in the area range = abscissa of the pixel point in the area range of the second picture in the blank picture;
ordinate of each pixel point in the area range = ordinate of the pixel point in the area range of the second picture in the blank picture - height value of the first size of the first picture.
For another example, when the target synthesis mode is that the second picture is spliced to the right side of the first picture, assuming that the lower left corner coordinate of the blank picture is used as the origin of coordinates, the coordinates of each pixel point in the region range corresponding to the color information of the pixel point extracted from the second picture can be obtained through the following formula:
the abscissa of each pixel point in the area range is equal to the abscissa of the coordinate of the pixel point in the area range of the second picture in the blank picture-the width value of the first size of the first picture
And the vertical coordinate of each pixel point in the area range is equal to the vertical coordinate of the pixel point in the area range of the second picture in the blank picture.
Then, in this embodiment of the application, the coordinates of the pixel point at the corresponding position in the second picture may be determined according to the coordinates of each pixel point in the area range, for example, the coordinates of the pixel point at the corresponding position in the second picture may be determined by the following formula:
coordinates of the pixel point at the corresponding position in the second picture = coordinates of the pixel point in the area range / size scaling of the second picture;
further, after the coordinates of the pixel points at the corresponding positions in the second picture are determined, a preset algorithm can be adopted to extract the color information of the corresponding pixel points in the second picture. The preset algorithm may be, but is not limited to, texture2d.
In this embodiment of the application, as shown in fig. 3, after extracting color information of a corresponding pixel point from a second picture, a target composite picture is generated according to the extracted color information, a first picture and a blank picture, which is specifically as follows:
and 301, assigning the extracted color information to a corresponding position of the blank picture according to the pixel point coordinates corresponding to the extracted color information.
In the embodiment of the application, the corresponding position of the color information in the blank picture can be determined according to the corresponding pixel point coordinates and the target synthesis mode, so that the extracted color information is assigned to the corresponding position of the blank picture. It should be noted that, the corresponding position of the color information in the blank picture is determined according to the corresponding pixel point coordinates and the synthesis mode, which may specifically refer to step 105, and is not described in detail herein.
Step 302, determining position information of the first picture in the blank picture according to the first size of the first picture.
In the embodiment of the application, when the target synthesis modes are different, the mode of determining the position information of the first picture in the blank picture is also different according to the first size of the first picture.
As an example, when the target synthesis manner is that the second picture is spliced above or on the right side of the first picture, assuming that the lower left corner of the blank picture is used as the origin of coordinates, the pixel point position can be traversed from the origin of coordinates, the size of the traversal region is the first size of the first picture, and the pixel point coordinates of the traversal region are used as the pixel point coordinates corresponding to the first picture.
As another example, when the target synthesis mode is that the second picture is spliced below the first picture, it is assumed that the lower left corner of the blank picture is taken as the origin of coordinates, the pixel position can be traversed from the (0, target size height value) position to the right and upward, the size of the traversal region is the first size of the first picture, the abscissa of the pixel in the traversal region is taken as the abscissa of the pixel corresponding to the first picture, and the difference between the ordinate of the pixel in the traversal region and the target size height value is taken as the ordinate of the pixel corresponding to the first picture.
As another example, when the target synthesis manner is that the second picture is spliced on the left side of the first picture, assuming that the lower left corner of the blank picture is taken as the origin of coordinates, the pixel point position can be traversed from the (target size width value, 0) position to the right and upward, the traversal region size is the first size of the first picture, the difference value between the abscissa of the pixel point of the traversal region and the width value of the target size is taken as the abscissa of the pixel point corresponding to the first picture, and the ordinate of the pixel point of the traversal region is taken as the ordinate of the pixel point corresponding to the first picture.
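The three traversal cases above reduce to an offset of the first picture inside the blank picture; a hypothetical helper is sketched below (lower left corner of the blank picture as the origin of coordinates; names and mode strings are assumptions).

```python
def first_picture_offset(target_size, mode):
    """Return the (x, y) position of the first picture's lower left corner in
    the blank picture; subtracting this offset from a traversed pixel's
    coordinates gives the corresponding pixel coordinates in the first picture."""
    target_w, target_h = target_size
    if mode in ("top", "right"):     # second picture above or on the right: first picture starts at the origin
        return (0, 0)
    if mode == "bottom":             # second picture below: first picture starts at y = target size height value
        return (0, target_h)
    if mode == "left":               # second picture on the left: first picture starts at x = target size width value
        return (target_w, 0)
    raise ValueError("unknown target synthesis mode")


print(first_picture_offset((1080, 1080), "bottom"))  # (0, 1080)
```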
Step 303, assigning color information at a corresponding position in the first picture to the blank picture according to the position information of the first picture in the blank picture to obtain a target synthesized picture.
Further, after the pixel point coordinates corresponding to the first picture are obtained, a preset algorithm can be used to extract the color information of the corresponding pixel points from the first picture. Then, the position information of the first picture in the blank picture is determined according to the pixel point coordinates of the first picture and the target synthesis mode, and the color information at the corresponding positions in the first picture is assigned to the blank picture to obtain the target synthetic picture. It should be noted that determining the position information of the first picture in the blank picture according to the pixel point coordinates of the first picture and the target synthesis mode may specifically refer to step 302 and is not described in detail here.
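Putting steps 301 to 303 together for the case in which the second picture is spliced at the bottom edge of the first picture, a minimal sketch might look as follows, again assuming NumPy arrays indexed with y = 0 at the bottom edge, both pictures having the same number of channels, and the per-pixel loop standing in for a texture lookup; all names are assumptions.

```python
import numpy as np

def compose_bottom(first_picture, second_picture):
    """Return the target synthetic picture with the second picture scaled to
    the first picture's width and stitched under it."""
    first_h, first_w = first_picture.shape[:2]
    second_h, second_w = second_picture.shape[:2]
    size_scaling = first_w / second_w                        # factor applied to the second picture
    target_w, target_h = first_w, int(round(second_h * size_scaling))
    blank = np.zeros((first_h + target_h, first_w, first_picture.shape[2]),
                     dtype=first_picture.dtype)
    # step 301: assign the extracted color information of the second picture
    for y in range(target_h):
        for x in range(target_w):
            sx = min(int(x / size_scaling), second_w - 1)
            sy = min(int(y / size_scaling), second_h - 1)
            blank[y, x] = second_picture[sy, sx]
    # steps 302-303: the first picture occupies the region starting at y = target_h
    blank[target_h:target_h + first_h, :first_w] = first_picture
    return blank


person = np.full((1920, 1080, 4), 255, dtype=np.uint8)        # picture to be shared
qr = np.random.randint(0, 256, (300, 300, 4), dtype=np.uint8)  # two-dimensional code
print(compose_bottom(person, qr).shape)  # (3000, 1080, 4)
```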
According to the picture synthesis method of the embodiments of the application, a first picture and a second picture to be synthesized are obtained; a target size of the second picture is calculated according to the first size of the first picture; a blank picture is generated according to the first size of the first picture and the target size of the second picture; the area range of the second picture in the blank picture is determined according to the target size of the second picture; color information at corresponding positions is extracted from the second picture according to the area range; and a target synthetic picture is generated according to the extracted color information, the first picture and the blank picture. In this method, the size of the second picture is scaled according to the size of the first picture to obtain the target size, a blank picture is generated according to the size of the first picture and the target size, and the color information of the second picture and of the first picture is assigned to the blank picture to obtain the synthetic picture. This way of synthesizing pictures avoids making the second picture with the same content in different sizes to meet the requirements of various synthesis scenes, requires no repeated drawing, reduces the occupation of resource memory, reduces coupling, reduces the workload of replacing pictures when synthesizing pictures in various scenes, and avoids operation errors caused by picture replacement.
Corresponding to the picture synthesis methods provided by the above embodiments, an embodiment of the present application further provides a picture synthesis apparatus, and since the picture synthesis apparatus provided by the embodiment of the present application corresponds to the picture synthesis methods provided by the above embodiments, the embodiments of the picture synthesis method are also applicable to the picture synthesis apparatus provided by the embodiment, and are not described in detail in the embodiment. Fig. 4 is a schematic structural diagram of a picture synthesis apparatus according to an embodiment of the present application. As shown in fig. 4, the picture synthesis apparatus includes: a picture acquisition module 410, a size scaling module 420, a picture generation module 430, a region range determination module 440, a color information extraction module 450, and a picture composition module 460.
The picture acquisition module 410 is configured to acquire a first picture and a second picture to be synthesized; the size scaling module 420 is configured to calculate a target size of the second picture according to the first size of the first picture; the picture generation module 430 is configured to generate a blank picture according to the first size of the first picture and the target size of the second picture; the area range determining module 440 is configured to determine the area range of the second picture in the blank picture according to the target size of the second picture; the color information extraction module 450 is configured to extract color information at corresponding positions from the second picture according to the area range; and the picture synthesis module 460 is configured to generate a target synthesis picture according to the extracted color information, the first picture, and the blank picture.
As a possible implementation manner of the embodiment of the present application, as shown in fig. 5, on the basis of fig. 4, the size scaling module 420 includes: a composition mode determining unit 421 and a size scaling unit 422.
The composition mode determining unit 421 is configured to determine a target composition mode for the first picture and the second picture; the size scaling unit 422 is configured to calculate the size of the second picture according to the target synthesis mode and the first size of the first picture to obtain the target size of the second picture; and the picture generation module 430 is specifically configured to generate a blank picture according to the target synthesis mode, the first size of the first picture and the target size of the second picture.
As a possible implementation manner of the embodiment of the present application, the size scaling unit 422 is specifically configured to: when the target synthesis mode is that the second picture is spliced at the bottom edge or the top edge of the first picture, taking the width value in the first size as the width value of the target size; calculating a first scaling according to the width value of the second picture and the width value in the first size; calculating a height value of the target size according to the first scaling and the height value of the second picture; determining the target size of the second picture according to the width value of the target size and the height value of the target size; when the target synthesis mode is that the second picture is spliced on the left side or the right side of the first picture, taking the height value in the first size as the height value of the target size; calculating a second scaling according to the height value of the second picture and the height value in the first size; calculating the width value of the target size according to the second scaling and the width value of the second picture; and determining the target size of the second picture according to the height value of the target size and the width value of the target size.
As a possible implementation manner of the embodiment of the present application, the picture generation module 430 is specifically configured to: when the target synthesis mode is that the second picture is spliced at the bottom edge or the top edge of the first picture, determining the width value of a blank picture to be generated according to the width value in the first size; determining the height value of a blank picture to be generated according to the height value in the first size and the height value in the target size; generating a blank picture with a corresponding size according to the width value and the height value of the blank picture to be generated; when the target synthesis mode is that the second picture is spliced on the left side or the right side of the first picture, determining the height value of a blank picture to be generated according to the height value in the first size; determining the width value of a blank picture to be generated according to the width value in the first size and the width value in the target size; and generating a blank picture with a corresponding size according to the height value and the width value of the blank picture to be generated.
As a possible implementation manner of the embodiment of the present application, the color information extraction module 450 is specifically configured to: determining the coordinates of each pixel point in the area range; determining the coordinates of the pixel points at the corresponding positions in the second picture according to the coordinates of the pixel points in the region range; and extracting the color information of the corresponding pixel point from the second picture according to the coordinate of the pixel point at the corresponding position.
As a possible implementation manner of the embodiment of the present application, the picture synthesis module 460 is specifically configured to: assigning the extracted color information to a corresponding position of the blank picture according to the pixel point coordinates corresponding to the extracted color information; determining the position information of the first picture in the blank picture according to the first size of the first picture; and assigning the color information at the corresponding position in the first picture to the blank picture according to the position information of the first picture in the blank picture to obtain the target synthetic picture.
The picture synthesis device of the embodiment of the application obtains a first picture and a second picture to be synthesized; calculates a target size of the second picture according to the first size of the first picture; generates a blank picture according to the first size of the first picture and the target size of the second picture; determines the area range of the second picture in the blank picture according to the target size of the second picture; extracts color information at corresponding positions from the second picture according to the area range; and generates a target synthetic picture according to the extracted color information, the first picture and the blank picture. The device thus scales the size of the second picture according to the size of the first picture to obtain the target size, generates a blank picture according to the size of the first picture and the target size, and assigns the color information of the second picture and of the first picture to the blank picture to obtain the synthetic picture. This way of synthesizing pictures avoids making the second picture with the same content in different sizes to meet the requirements of various synthesis scenes, requires no repeated drawing, reduces the occupation of resource memory, reduces coupling, reduces the workload of replacing pictures when synthesizing pictures in various scenes, and avoids misoperation caused by picture replacement.
In order to implement the above embodiments, the present application further provides an electronic device. Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device includes: memory 1001, processor 1002, and computer programs stored on memory 1001 and executable on processor 1002.
The processor 1002, when executing the program, implements the picture synthesis method provided in the above-described embodiments.
Further, the electronic device further includes:
a communication interface 1003 for communicating between the memory 1001 and the processor 1002.
A memory 1001 for storing computer programs that may be run on the processor 1002.
Memory 1001 may include high-speed RAM memory and may also include non-volatile memory (e.g., at least one disk memory).
The processor 1002 is configured to implement the picture synthesis method according to the foregoing embodiment when executing the program.
If the memory 1001, the processor 1002, and the communication interface 1003 are implemented independently, the communication interface 1003, the memory 1001, and the processor 1002 may be connected to each other through a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in FIG. 6, but this does not mean that there is only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 1001, the processor 1002, and the communication interface 1003 are integrated on one chip, the memory 1001, the processor 1002, and the communication interface 1003 may complete communication with each other through an internal interface.
The processor 1002 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
In order to implement the foregoing embodiments, the present application further proposes a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the picture synthesis method according to the foregoing embodiments.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one of the following technologies known in the art, or a combination thereof, may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having appropriate combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be implemented by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application; variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A picture synthesis method, comprising:
acquiring a first picture and a second picture to be synthesized;
calculating a target size of the second picture according to a first size of the first picture;
generating a blank picture according to the first size of the first picture and the target size of the second picture;
determining the area range of the second picture in the blank picture according to the target size of the second picture;
extracting color information at corresponding positions from the second picture according to the area range, and generating a target synthetic picture according to the extracted color information, the first picture, and the blank picture.
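To make the five steps of claim 1 concrete, the following is a minimal Python sketch of one possible reading of the flow, assuming the Pillow library and a bottom-edge splice; the function names, file paths, and the use of resize/paste are illustrative assumptions rather than the patent's own implementation (the dependent claims spell out a per-pixel color copy instead).

```python
from PIL import Image

def compose_bottom(first_path, second_path, out_path):
    # Step 1: acquire the first picture and the second picture to be synthesized.
    first = Image.open(first_path).convert("RGB")
    second = Image.open(second_path).convert("RGB")

    # Step 2: compute the target size of the second picture from the first size
    # (width is matched, height is scaled proportionally).
    scale = first.width / second.width
    target_w, target_h = first.width, round(second.height * scale)

    # Step 3: generate a blank picture large enough to hold both pictures.
    blank = Image.new("RGB", (first.width, first.height + target_h), (255, 255, 255))

    # Step 4: the area range of the second picture inside the blank picture
    # is the rectangle directly below the first picture.
    area = (0, first.height, target_w, first.height + target_h)

    # Step 5: fill the area range with color information taken from the second
    # picture (approximated here by resize + paste), then copy in the first picture.
    blank.paste(second.resize((target_w, target_h)), (area[0], area[1]))
    blank.paste(first, (0, 0))
    blank.save(out_path)
    return blank
```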
2. The method according to claim 1, wherein calculating the target size of the second picture according to the first size of the first picture comprises:
determining a target synthesis mode for the first picture and the second picture;
calculating the size of the second picture according to the target synthesis mode and the first size of the first picture to obtain the target size of the second picture;
and wherein generating a blank picture according to the first size of the first picture and the target size of the second picture comprises:
generating the blank picture according to the target synthesis mode, the first size of the first picture, and the target size of the second picture.
3. The picture synthesis method according to claim 2, wherein calculating the size of the second picture according to the target synthesis mode and the first size of the first picture to obtain the target size of the second picture comprises:
when the target synthesis mode is splicing the second picture at the bottom edge or the top edge of the first picture, taking the width value in the first size as the width value of the target size;
calculating a first scaling ratio according to the width value of the second picture and the width value in the first size;
calculating the height value of the target size according to the first scaling ratio and the height value of the second picture;
determining the target size according to the width value of the target size and the height value of the target size;
when the target synthesis mode is splicing the second picture on the left side or the right side of the first picture, taking the height value in the first size as the height value of the target size;
calculating a second scaling ratio according to the height value of the second picture and the height value in the first size;
calculating the width value of the target size according to the second scaling ratio and the width value of the second picture;
determining the target size of the second picture according to the height value of the target size and the width value of the target size.
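As a worked illustration of the two scaling branches in claim 3, the short helper below computes the target size for both splice directions; it is a sketch with illustrative names and is not code taken from the patent.

```python
def target_size(first_size, second_size, mode):
    """Return (width, height) of the second picture after scaling.

    first_size, second_size: (width, height) tuples.
    mode: "top"/"bottom" splices along the width, "left"/"right" along the height.
    """
    first_w, first_h = first_size
    second_w, second_h = second_size
    if mode in ("top", "bottom"):
        # The width of the target size is taken from the first size;
        # the first scaling ratio rescales the height proportionally.
        ratio = first_w / second_w
        return first_w, round(second_h * ratio)
    else:
        # The height of the target size is taken from the first size;
        # the second scaling ratio rescales the width proportionally.
        ratio = first_h / second_h
        return round(second_w * ratio), first_h

# Example: splicing an 800x600 second picture under a 1080x1920 first picture
# gives target_size((1080, 1920), (800, 600), "bottom") == (1080, 810).
```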
4. The picture synthesis method according to claim 2, wherein generating a blank picture according to the target synthesis mode, the first size of the first picture, and the target size of the second picture comprises:
when the target synthesis mode is splicing the second picture at the bottom edge or the top edge of the first picture, determining the width value of the blank picture to be generated according to the width value in the first size;
determining the height value of the blank picture to be generated according to the height value in the first size and the height value in the target size;
generating a blank picture of the corresponding size according to the width value and the height value of the blank picture to be generated;
when the target synthesis mode is splicing the second picture on the left side or the right side of the first picture, determining the height value of the blank picture to be generated according to the height value in the first size;
determining the width value of the blank picture to be generated according to the width value in the first size and the width value in the target size;
generating a blank picture of the corresponding size according to the height value and the width value of the blank picture to be generated.
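The blank-picture dimensions of claim 4 follow directly from the two sizes. A small sketch (illustrative names, assuming the target size from claim 3 has already been computed) is:

```python
def blank_size(first_size, target_size, mode):
    """Size of the blank picture that will hold both pictures."""
    first_w, first_h = first_size
    target_w, target_h = target_size
    if mode in ("top", "bottom"):
        # Same width as the first picture, heights stacked vertically.
        return first_w, first_h + target_h
    else:
        # Same height as the first picture, widths placed side by side.
        return first_w + target_w, first_h

# Continuing the example above: blank_size((1080, 1920), (1080, 810), "bottom")
# yields (1080, 2730).
```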
5. The picture synthesis method according to claim 1, wherein extracting color information at the corresponding positions from the second picture according to the area range comprises:
determining the coordinates of each pixel point in the area range;
determining the coordinates of the pixel points at the corresponding positions in the second picture according to the coordinates of the pixel points in the area range;
extracting the color information of the corresponding pixel points from the second picture according to the coordinates of the pixel points at the corresponding positions.
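One way to realize the pixel-by-pixel extraction of claim 5 is to map each coordinate in the area range back to the unscaled second picture and read the color there. The sketch below assumes Pillow and nearest-neighbour mapping; all names are illustrative assumptions.

```python
from PIL import Image

def extract_colors(second, area, target_size):
    """Map each pixel in the area range to the second picture and read its color.

    second: PIL.Image of the second picture (original size).
    area: (left, top, right, bottom) of the area range inside the blank picture.
    target_size: (width, height) that the second picture is scaled to.
    Returns a dict {(x, y) in the blank picture: (r, g, b)}.
    """
    left, top, right, bottom = area
    target_w, target_h = target_size
    colors = {}
    for y in range(top, bottom):
        for x in range(left, right):
            # Coordinates inside the area range, then the matching coordinates
            # in the second picture (nearest-neighbour scaling).
            src_x = min((x - left) * second.width // target_w, second.width - 1)
            src_y = min((y - top) * second.height // target_h, second.height - 1)
            colors[(x, y)] = second.getpixel((src_x, src_y))
    return colors
```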
6. The picture synthesis method according to any one of claims 1 to 5, wherein generating the target synthetic picture according to the extracted color information, the first picture, and the blank picture comprises:
assigning the extracted color information to the corresponding positions of the blank picture according to the pixel point coordinates corresponding to the extracted color information;
determining the position information of the first picture in the blank picture according to the first size of the first picture;
assigning the color information at the corresponding positions in the first picture to the blank picture according to the position information of the first picture in the blank picture, to obtain the target synthetic picture.
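Claim 6 then writes the extracted colors and the first picture into the blank picture. A minimal Pillow sketch of that assignment step follows; it takes the color dictionary produced by an extraction helper such as the one sketched after claim 5, and every name here is an illustrative assumption, not the patent's own code.

```python
from PIL import Image

def assemble(blank, colors, first):
    """Write the extracted colors and then the first picture into the blank picture.

    blank: PIL.Image created with the size from claim 4.
    colors: {(x, y): (r, g, b)} produced by the extraction step of claim 5.
    first: PIL.Image of the first picture, placed at the top-left corner here
           (its position inside the blank picture depends in general on the
           target synthesis mode).
    """
    # Assign each extracted color to the corresponding position of the blank picture.
    for (x, y), rgb in colors.items():
        blank.putpixel((x, y), rgb)

    # Assign the color information of the first picture to its position
    # in the blank picture.
    for y in range(first.height):
        for x in range(first.width):
            blank.putpixel((x, y), first.getpixel((x, y)))
    return blank  # the target synthetic picture
```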
7. A picture composition apparatus, comprising:
the picture acquisition module is used for acquiring a first picture and a second picture to be synthesized;
the size scaling module is used for calculating the target size of the second picture according to the first size of the first picture;
the picture generation module is used for generating a blank picture according to the first size of the first picture and the target size of the second picture;
the area range determining module is used for determining the area range of the second picture in the blank picture according to the target size of the second picture;
the color information extraction module is used for extracting color information at the corresponding positions from the second picture according to the area range;
and the picture synthesis module is used for generating a target synthesis picture according to the extracted color information, the first picture and the blank picture.
8. The picture synthesis apparatus according to claim 7, wherein the size scaling module comprises:
a synthesis mode determination unit configured to determine a target synthesis mode for the first picture and the second picture;
the size scaling unit is used for calculating the size of the second picture according to the target synthesis mode and the first size of the first picture so as to obtain the target size of the second picture;
the picture generation module is specifically configured to:
and generating a blank picture according to the target synthesis mode, the first size of the first picture and the target size of the second picture.
9. An electronic device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the picture synthesis method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the picture synthesis method according to any one of claims 1 to 6.
CN201911309794.9A 2019-12-18 2019-12-18 Picture synthesis method, device, electronic equipment and storage medium Active CN111063001B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911309794.9A CN111063001B (en) 2019-12-18 2019-12-18 Picture synthesis method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911309794.9A CN111063001B (en) 2019-12-18 2019-12-18 Picture synthesis method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111063001A true CN111063001A (en) 2020-04-24
CN111063001B CN111063001B (en) 2023-11-10

Family

ID=70302298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911309794.9A Active CN111063001B (en) 2019-12-18 2019-12-18 Picture synthesis method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111063001B (en)

Patent Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101609661A (en) * 2008-06-19 2009-12-23 富士施乐株式会社 Information display device and method for information display
KR20110054674A (en) * 2009-11-18 2011-05-25 전제봉 Method for producing moving picture of composite user image
US20140149855A1 (en) * 2010-10-21 2014-05-29 Uc Mobile Limited Character Segmenting Method and Apparatus for Web Page Pictures
CN102956036A (en) * 2011-08-30 2013-03-06 中国电信股份有限公司 Image processing method and device
CN102831593A (en) * 2012-07-23 2012-12-19 陈华 Digital picture splicing system and method for carrying out mosaic picture splicing by using system
CN102831568A (en) * 2012-08-03 2012-12-19 网易(杭州)网络有限公司 Method and device for generating verification code picture
CN103797787A (en) * 2012-09-10 2014-05-14 华为技术有限公司 Image processing method and image processing device
CN104103085A (en) * 2013-04-11 2014-10-15 三星电子株式会社 Objects in screen images
WO2014169653A1 (en) * 2013-08-28 2014-10-23 中兴通讯股份有限公司 Method and device for optimizing image synthesis
CN104424624A (en) * 2013-08-28 2015-03-18 中兴通讯股份有限公司 Image synthesis optimization method and device
US20150154776A1 (en) * 2013-12-03 2015-06-04 Huawei Technologies Co., Ltd. Image splicing method and apparatus
US20160063672A1 (en) * 2014-08-29 2016-03-03 Fu Tai Hua Industry (Shenzhen) Co., Ltd. Electronic device and method for generating thumbnail picture
CN105487766A (en) * 2015-11-24 2016-04-13 努比亚技术有限公司 Picture capture method and apparatus
CN105719240A (en) * 2016-01-21 2016-06-29 腾讯科技(深圳)有限公司 Method and apparatus for picture processing
CN108959303A (en) * 2017-05-22 2018-12-07 阿里巴巴集团控股有限公司 A kind of exhibiting pictures generate, layout generation method and data processing server
US20180350035A1 (en) * 2017-05-31 2018-12-06 International Business Machines Corporation Dynamic picture sizing based on user access criteria
CN109947972A (en) * 2017-10-11 2019-06-28 腾讯科技(深圳)有限公司 Reduced graph generating method and device, electronic equipment, storage medium
CN108595239A (en) * 2018-04-18 2018-09-28 腾讯科技(深圳)有限公司 image processing method, device, terminal and computer readable storage medium
CN109146783A (en) * 2018-07-23 2019-01-04 北京金山安全软件有限公司 Picture splicing method and device, electronic equipment and medium
CN109271619A (en) * 2018-08-31 2019-01-25 平安科技(深圳)有限公司 Mail pattern processing method, device, computer equipment and storage medium
CN109785229A (en) * 2019-01-11 2019-05-21 百度在线网络技术(北京)有限公司 Intelligence group photo method, apparatus, equipment and the medium realized based on block chain
CN110266942A (en) * 2019-06-03 2019-09-20 Oppo(重庆)智能科技有限公司 The synthetic method and Related product of picture
CN110490808A (en) * 2019-08-27 2019-11-22 腾讯科技(深圳)有限公司 Picture joining method, device, terminal and storage medium
CN110516211A (en) * 2019-08-30 2019-11-29 上海互盾信息科技有限公司 A kind of method that Word document is converted to long picture

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHENG Xiaoqiang; YANG Min; GUI Zhipeng; AI Tinghua; WU Huayi: "Automatic thumbnail generation algorithm for web map services under information content and similarity constraints", Acta Geodaetica et Cartographica Sinica, no. 11 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113419802A (en) * 2021-06-21 2021-09-21 网易(杭州)网络有限公司 Atlas generation method and apparatus, electronic device and storage medium
CN113419802B (en) * 2021-06-21 2022-08-05 网易(杭州)网络有限公司 Atlas generation method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
CN111063001B (en) 2023-11-10

Similar Documents

Publication Publication Date Title
CN108550101B (en) Image processing method, device and storage medium
KR20070093995A (en) Motion vector calculation method, hand-movement correction device using the method, imaging device, and motion picture generation device
JP6663285B2 (en) Image generation method and image generation system
KR101579873B1 (en) Image processing apparatus, image processing method, and computer readable medium
US20120280996A1 (en) Method and system for rendering three dimensional views of a scene
KR20120114153A (en) Image processing apparatus, image processing method, and computer readable medium
CN111127543B (en) Image processing method, device, electronic equipment and storage medium
US9292732B2 (en) Image processing apparatus, image processing method and computer program product
CN111583381B (en) Game resource map rendering method and device and electronic equipment
CN110176010A (en) A kind of image detecting method, device, equipment and storage medium
CN112347292A (en) Defect labeling method and device
CN111292335B (en) Method and device for determining foreground mask feature map and electronic equipment
JP5676610B2 (en) System and method for artifact reduction based on region of interest of image sequence
CN114520894A (en) Projection area determining method and device, projection equipment and readable storage medium
CN111063001A (en) Picture synthesis method and device, electronic equipment and storage medium
CN114445555A (en) Shoe tree modeling adjustment method, device, equipment and storage medium
JP2010147937A (en) Image processing apparatus
CN111917986A (en) Image processing method, medium thereof, and electronic device
CN116977671A (en) Target tracking method, device, equipment and storage medium based on image space positioning
JP6546385B2 (en) IMAGE PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND PROGRAM
CN110197228B (en) Image correction method and device
CN114463477A (en) Model mapping method and device and electronic equipment
US7330589B2 (en) Image partitioning apparatus and method
JP2011053456A (en) Image display method, program, image display device, and imaging apparatus with the image display device
CN111524086A (en) Moving object detection device, moving object detection method, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant