CN111372009A - Image processing method and processing equipment


Info

Publication number
CN111372009A
CN111372009A
Authority
CN
China
Prior art keywords
image
combined display
display area
parameter
combined
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010245500.7A
Other languages
Chinese (zh)
Other versions
CN111372009B (en)
Inventor
董芳菲 (Dong Fangfei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202010245500.7A
Publication of CN111372009A
Application granted
Publication of CN111372009B
Legal status: Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628 - Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation

Abstract

The application provides an image processing method and a processing device. The method includes: determining combined display parameters of a combined display area, where the combined display parameters include at least position parameters and/or size parameters of at least two sub-display areas constituting the combined display area; determining a first image to be displayed in the combined display area; and, if the first image and the combined display area do not satisfy a matching condition, processing the first image at least according to the combined display parameters to obtain a second image that satisfies the matching condition with the combined display area, so that the combined display area can present a better display effect.

Description

Image processing method and processing equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and an image processing apparatus.
Background
Many devices currently have a display function, that is, they are capable of displaying images. When displaying an image, an existing device shows it on a single display unit without considering whether the image matches that display unit, so the device cannot present an ideal display effect.
Disclosure of Invention
In view of the above, the present application provides an image processing method and a processing device to solve the above technical problem.
In order to achieve the above purpose, the present application provides the following technical solutions:
an image processing method comprising:
determining a combined display parameter of the combined display area; wherein the combined display parameters comprise at least position parameters and/or size parameters of at least two sub-display areas constituting the combined display area;
determining a first image to be displayed in the combined display area;
and if the first image and the combined display area do not meet the matching condition, processing the first image at least according to the combined display parameter to obtain a second image, wherein the second image and the combined display area meet the matching condition.
Optionally, the method further includes:
controlling the second image to be displayed in a combined display area formed by a display unit of the first device and an extended display unit, wherein the extended display unit is positioned on a second device connected with the first device;
or, the second image is divided to generate at least a first partial image and a second partial image, and at least the second partial image is transmitted to a second device;
wherein dividing the second image to generate at least the first partial image and a second partial image, and transmitting at least the second partial image to a second device comprises:
dividing the second image to generate at least a first partial image and a second partial image, controlling a first display unit of a first device to display the first partial image, and transmitting at least the second partial image to a second device so that a second display unit of the second device displays the second partial image; wherein the combined display area is constituted by at least the first display unit and the second display unit;
or, the second image is divided to generate at least a first partial image and a second partial image, at least the first partial image is transmitted to a first device to cause a first display unit of the first device to display the first partial image, and at least the second partial image is transmitted to a second device to cause a second display unit of the second device to display the second partial image; wherein the combined display area is constituted by at least the first display unit and the second display unit.
Optionally, the combined display parameter includes a size parameter; the method further comprises the following steps:
obtaining an image size parameter of the first image;
correspondingly, if the first image and the combined display area do not meet the matching condition, processing the first image according to the combined display parameter at least to obtain a second image comprises:
if the image size parameter does not match the size parameter, cropping or filling the first image based on the image size parameter and the size parameter to generate a second image;
the image size parameter is a proportion parameter of a first edge and a second edge of the first image, and the size parameter is a proportion parameter of the first edge and the second edge of the combined display area;
the filling is used for representing that a first edge of the first image corresponds to a first edge of the combined display area and filling a second edge of the first image with images;
and the cropping is used for representing that the second edge of the first image corresponds to the second edge of the combined display area, and the first edge of the first image is subjected to image cropping.
Optionally, the method further includes:
identifying a primary content feature in the first image;
correspondingly, if the first image and the combined display area do not meet the matching condition, processing the first image according to the combined display parameter at least to obtain a second image comprises: and if the main content features and the combined display area do not meet the matching condition, processing the first image according to the combined display parameters at least to obtain a second image.
Optionally, the processing the first image to obtain a second image according to at least the combined display parameter includes:
magnifying the first image based on at least the combined display parameter to generate a third image;
clipping the third image to generate a second image;
or, the processing the first image to obtain a second image at least according to the combined display parameter includes:
cropping the first image based on at least the combined display parameter, generating a third image;
filling the third image to generate a second image; wherein the cropped portion of the first image is a different portion than the padded portion of the third image;
or, the processing the first image to obtain a second image at least according to the combined display parameter includes:
reducing the first image based on at least the combined display parameter to generate a third image;
filling the third image to generate a second image;
or, the processing the first image to obtain a second image at least according to the combined display parameter includes:
magnifying the first image based on at least the combined display parameter to generate a third image;
cropping the third image to generate a fourth image;
filling the fourth image to generate a second image; wherein the cropped portion of the third image is a different portion than the filled portion of the fourth image.
Optionally, the processing makes the main content features all located in one sub-display region, or the main content features are symmetrically displayed according to a combined display position of at least two sub-display regions in the combined display region.
Optionally, the combined display parameter includes a size parameter, and the method further includes:
obtaining an image size parameter of the first image;
judging whether the image size parameter and the size parameter meet a matching condition;
if not, judging whether the main content features and the combined display area meet matching conditions;
correspondingly, if the first image and the combined display area do not meet the matching condition, processing the first image according to the combined display parameter at least to obtain a second image comprises:
if the image size parameter and the size parameter do not satisfy the matching condition, and the main content feature and the combined display area do not satisfy the matching condition, the first image is processed at least according to the combined display parameters to obtain a second image, such that the image size parameter of the second image and the size parameter satisfy the matching condition, and the main content feature of the second image and the combined display area satisfy the matching condition.
A processing device, comprising:
a memory for storing a program;
a processor running the program for determining a combined display parameter of a combined display area, determining a first image to be displayed in the combined display area, and processing the first image according to the combined display parameter at least to obtain a second image if the first image and the combined display area do not satisfy a matching condition; wherein the combined display parameters comprise at least position parameters and/or size parameters of at least two sub-display areas constituting the combined display area; the second image and the combined display area satisfy a matching condition.
Optionally, the processing device is a first device, and the first device further includes:
a first display unit and a communication unit;
the communication unit is used for communicating with a second device, the second device is provided with a second display unit, and the second display unit is used as an expansion display unit of the first device;
the processor is further configured to control the first image to be displayed in a combined display area formed by the first display unit and the second display unit.
Optionally, the method further includes:
a communication unit;
the processor is further configured to segment the second image to generate at least the first partial image and the second partial image, and transmit at least the second partial image to a second device via the communication unit;
wherein:
if the processing device is a first device, the processor is specifically configured to segment the second image to generate at least a first partial image and a second partial image, control a first display unit of the first device to display the first partial image, and send the second partial image to a second device at least through the communication unit, so that a second display unit of the second device displays the second partial image; wherein the combined display area is constituted by at least the first display unit and the second display unit;
if the processing device is a server, the processor is specifically configured to segment the second image to generate at least a first partial image and a second partial image, transmit the first partial image to a first device at least through the communication unit so as to cause a first display unit of the first device to display the first partial image, and transmit the second partial image to a second device at least through the communication unit so as to cause a second display unit of the second device to display the second partial image; wherein the combined display area is constituted by at least the first display unit and the second display unit.
As can be seen from the above technical solutions, the present application provides an image processing method that determines combined display parameters of a combined display area, where the combined display parameters include at least position parameters and/or size parameters of at least two sub-display areas constituting the combined display area, and determines a first image to be displayed in the combined display area. If the first image and the combined display area do not satisfy a matching condition, the first image is processed at least according to the combined display parameters to obtain a second image that does satisfy the matching condition. In other words, in the present application the combined display area includes at least two sub-display areas, and when the first image does not match the combined display area, the first image is processed into a second image that does match it, so that the combined display area can present a better display effect.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image processing method according to method embodiment one of the present application;
Fig. 2 is a schematic flowchart of an image processing method according to method embodiment two of the present application;
Fig. 3 is a schematic flowchart of an image processing method according to method embodiment three of the present application;
Fig. 4a is a schematic diagram of a combined display area according to method embodiment three of the present application;
Fig. 4b is a schematic diagram of segmenting the second image according to method embodiment three of the present application;
Fig. 5a is a schematic diagram of the size parameters of a combined display area according to method embodiment three of the present application;
Fig. 5b is a schematic diagram of segmenting the second image according to method embodiment three of the present application;
Fig. 6 is a schematic flowchart of an image processing method according to method embodiment four of the present application;
Fig. 7a is a schematic diagram of a combined display area according to method embodiment four of the present application;
Fig. 7b is a schematic diagram of filling one second edge of the first image according to method embodiment four of the present application;
Fig. 7c is a schematic diagram of filling two second edges of the first image according to method embodiment four of the present application;
Fig. 7d is a schematic diagram of cropping one first edge of the first image according to method embodiment four of the present application;
Fig. 7e is a schematic diagram of cropping two first edges of the first image according to method embodiment four of the present application;
Fig. 8a is a schematic diagram of cropping the first image according to method embodiment four of the present application;
Fig. 8b is a schematic diagram of the second image according to method embodiment four of the present application;
Fig. 8c is a schematic diagram of segmenting the second image according to method embodiment four of the present application;
Fig. 8d is a schematic diagram of the second image displayed in the combined display area according to method embodiment four of the present application;
Fig. 9 is a schematic flowchart of an image processing method according to method embodiment five of the present application;
Fig. 10a is a schematic diagram of a combined display area according to method embodiment five of the present application;
Fig. 10b is a schematic diagram of cropping the first image according to method embodiment five of the present application;
Fig. 10c is a schematic diagram of the second image according to method embodiment five of the present application;
Fig. 10d is a schematic diagram of segmenting the second image according to method embodiment five of the present application;
Fig. 10e is a schematic diagram of the third image according to method embodiment five of the present application;
Fig. 10f is a schematic diagram of segmenting the second image according to method embodiment five of the present application;
Fig. 10g is a schematic diagram of displaying the second image in the combined display area according to method embodiment five of the present application;
Fig. 11a is a schematic diagram of a combined display area according to method embodiment five of the present application;
Fig. 11b is a schematic diagram of cropping the first image according to method embodiment five of the present application;
Fig. 11c is a schematic diagram of the second image according to method embodiment five of the present application;
Fig. 11d is a schematic diagram of segmenting the second image according to method embodiment five of the present application;
Fig. 11e is a schematic diagram of the third image according to method embodiment five of the present application;
Fig. 11f is a schematic diagram of displaying the second image in the combined display area according to method embodiment five of the present application;
Fig. 12 is a schematic flowchart of an image processing method according to method embodiment six of the present application;
Fig. 13 is a schematic structural diagram of a processing device according to apparatus embodiment one of the present application;
Fig. 14 is a schematic structural diagram of a processing device according to apparatus embodiment two of the present application;
Fig. 15 is a schematic structural diagram of a processing device according to apparatus embodiment three of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
A first embodiment of the method of the present application provides an image processing method; as shown in fig. 1, the method includes the following steps:
step 101: determining a combined display parameter of the combined display area;
wherein the combined display parameter at least comprises a position parameter and/or a size parameter of at least two sub-display areas constituting the combined display area.
The different sub-display regions may be different display regions of one display unit, that is, the combined display region may include a combined display region formed by at least two sub-display regions of one display unit. Alternatively, one sub display region may be a display region of one display unit, that is, the combined display region may include a combined display region composed of at least two display units.
The position parameter characterizes the arrangement of the at least two sub-display areas constituting the combined display area. For example, if sub-display area A and sub-display area B are arranged side by side, the position parameter characterizes their left-right positional relationship; if sub-display area A and sub-display area B are arranged one above the other, the position parameter characterizes their top-bottom positional relationship.
The size parameter is used to characterize the size of the combined display area and/or the size of each of at least two sub-display areas that make up the combined display area. The size parameter may be a ratio parameter of two sides in the combined display area or the sub-display area, or may be an area parameter of the combined display area or the sub-display area.
Step 102: determining a first image to be displayed in the combined display area;
the first image is an image that needs to be displayed in the combined display area but is not currently displayed, and the manner of acquiring the first image is not limited in this application, and may be acquired from a locally stored image library, or acquired from a network, or received from another device, etc.
Step 103: and if the first image and the combined display area do not meet the matching condition, processing the first image at least according to the combined display parameter to obtain a second image.
Wherein the second image and the combined display area satisfy a matching condition.
It can be seen that, in the present embodiment, the combined display parameters of the combined display area are determined, where the combined display parameters include at least the position parameters and/or the size parameters of the at least two sub-display areas constituting the combined display area, and the first image to be displayed in the combined display area is determined. If the first image and the combined display area do not satisfy the matching condition, the first image is processed at least according to the combined display parameters to obtain a second image that does satisfy the matching condition. That is, in the present application the combined display area includes at least two sub-display areas, and when the first image does not match the combined display area, the first image is processed into a second image that matches it, so that the combined display area can present a better display effect.
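The gist of steps 101 to 103 can be illustrated with a minimal sketch. The sketch below assumes horizontally tiled sub-display areas, takes the matching condition to be an aspect-ratio comparison, and uses the Pillow library; the function names and the 27:14 example image are illustrative, not taken from the patent.

from PIL import Image

def combined_aspect(sub_sizes):
    """Aspect ratio (width/height) of a combined area made of horizontally tiled sub-areas."""
    total_w = sum(w for w, _ in sub_sizes)
    height = sub_sizes[0][1]  # assumption: all sub-areas share one height
    return total_w / height

def satisfies_matching_condition(image, sub_sizes, tol=1e-3):
    """Illustrative matching condition: the image ratio equals the combined-area ratio."""
    return abs(image.width / image.height - combined_aspect(sub_sizes)) < tol

# Three 9:16 sub-display areas side by side give a 27:16 combined display area.
sub_sizes = [(9, 16), (9, 16), (9, 16)]
first_image = Image.new("RGB", (2700, 1400))  # 27:14, so the condition fails
if not satisfies_matching_condition(first_image, sub_sizes):
    pass  # step 103: process the first image (crop / fill / scale) into a second image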
A second embodiment of the method of the present application provides an image processing method, mainly describing one display manner of the second image; as shown in fig. 2, the method includes the following steps:
step 201: determining a combined display parameter of the combined display area;
wherein the combined display parameter at least comprises a position parameter and/or a size parameter of at least two sub-display areas constituting the combined display area.
Step 202: determining a first image to be displayed in the combined display area;
step 203: if the first image and the combined display area do not meet the matching condition, processing the first image at least according to the combined display parameter to obtain a second image;
wherein the second image and the combined display region satisfy a matching condition.
Step 204: and controlling the second image to be displayed in a combined display area formed by the display unit and the extended display unit of the first device.
Here the extended display unit is located on a second device connected to the first device. It should be understood that the image processing method of the present application is applied to a processing device, and in this embodiment the processing device is the first device. The first device has a display unit and an extended display unit, the extended display unit being located on a second device connected to the first device. The display unit and the extended display unit then serve as at least two sub-display areas constituting the combined display area, and the first device can control the second image to be displayed in the display unit and the extended display unit, where "displayed" means that the display unit and the extended display unit jointly display a single second image.
In this embodiment, there may be one or more extended display units.
Therefore, in the embodiment, the combined display area is at least composed of the display unit of the first device and the extended display unit, and in the case that the first image does not match with the combined display area, the first image is processed to obtain the second image, so that the second image matches with the combined display area, and the combined display area can present a better display effect.
A third embodiment of the method of the present application provides an image processing method to mainly describe another display mode of a second image, and as shown in fig. 3, the method includes the following steps:
step 301: determining a combined display parameter of the combined display area;
wherein the combined display parameter at least comprises a position parameter and/or a size parameter of at least two sub-display areas constituting the combined display area.
Step 302: determining a first image to be displayed in the combined display area;
step 303: if the first image and the combined display area do not meet the matching condition, processing the first image at least according to the combined display parameter to obtain a second image;
wherein the second image and the combined display region satisfy a matching condition.
Step 304: the second image is segmented to generate at least a first partial image and a second partial image, and at least the second partial image is transmitted to a second device.
It is understood that an image processing method in the present application is applied to a processing device, and in the present embodiment, the processing device is capable of transmitting at least a second partial image to a second device, which is independent of the processing device.
In the first aspect of this embodiment, the dividing the second image to generate at least the first partial image and the second partial image, and transmitting at least the second partial image to the second device may include:
the second image is divided to generate at least a first partial image and a second partial image, a first display unit of a first device is controlled to display the first partial image, and at least the second partial image is transmitted to a second device so that a second display unit of the second device displays the second partial image.
Wherein the combined display area is constituted by at least the first display unit and the second display unit. That is, the first display unit and the second display unit respectively serve as sub display regions of the combined display region.
In a first mode, the processing device may be a first device and may be capable of communicating with a second device, thereby being capable of transmitting at least the second partial image to the second device, the second device being independent of the first device.
In one approach, a segmentation manner may be determined based on the position parameters of the at least two sub-display areas, and the second image is segmented according to the determined segmentation manner to generate at least a first partial image and a second partial image. For example, as shown in fig. 4a, the first display unit X1, the second display unit Y1 and the second display unit Y2 of the combined display area are arranged horizontally. Based on this positional relationship, the segmentation manner of the second image P2 shown in fig. 4b can be determined: the first display unit X1 is controlled to display the first partial image P21, and the second partial images P22 and P23 are sent to the second devices, so that the second display unit Y1 displays the second partial image P22 and the second display unit Y2 displays the second partial image P23.
Alternatively, the segmentation manner may be determined based on the size parameters, and the second image is segmented according to the determined segmentation manner to generate at least the first partial image and the second partial image. The size parameters may be the sizes of each of the at least two sub-display areas constituting the combined display area. For example, as shown in fig. 5a, the size parameter of the first display unit X1 is 9:16 and the size parameter of the second display unit Y1 is 9:16; based on this, the segmentation manner of the second image P2 shown in fig. 5b can be determined, in which the size parameter of the first partial image P21 is 9:16 and the size parameter of the second partial image P22 is 9:16. The first display unit X1 is then controlled to display the first partial image P21, and the second partial image P22 is sent to the second device so that the second display unit Y1 displays it.
In yet another approach, a segmentation may be determined based on a position parameter and a size parameter, which may be a size parameter of the combined display area, such that segmenting the second image based on the determined segmentation generates at least a first partial image and a second partial image.
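As an illustration of the segmentation manners above, the following sketch splits a second image into partial images in proportion to the widths of horizontally arranged sub-display areas; it assumes the Pillow library, and the function name and the equal-width example are illustrative, not taken from the patent.

from PIL import Image

def split_for_sub_displays(second_image, sub_widths):
    """Cut the second image into one partial image per sub-display area,
    in proportion to each sub-area's width (left-to-right arrangement assumed)."""
    total = sum(sub_widths)
    parts, left, acc = [], 0, 0
    for w in sub_widths:
        acc += w
        right = round(second_image.width * acc / total)
        parts.append(second_image.crop((left, 0, right, second_image.height)))
        left = right
    return parts

# Example following fig. 4a/4b: X1, Y1 and Y2 arranged horizontally with equal widths.
second_image = Image.new("RGB", (2700, 1600))
p21, p22, p23 = split_for_sub_displays(second_image, [9, 9, 9])
# P21 stays on the first device's display unit; P22 and P23 are sent to the
# second device(s) so that Y1 and Y2 display them.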
In a second aspect of this embodiment, a method for dividing the second image to generate at least the first partial image and the second partial image and transmitting at least the second partial image to a second device includes:
the second image is divided to generate at least a first partial image and a second partial image, at least the first partial image is transmitted to a first device to cause a first display unit of the first device to display the first partial image, and at least the second partial image is transmitted to a second device to cause a second display unit of the second device to display the second partial image.
Wherein the combined display area is constituted by at least the first display unit and the second display unit. That is, the first display unit and the second display unit respectively serve as sub display regions of the combined display region.
In the second mode, the processing device may be a server and may be capable of communicating with the first device and the second device, respectively, so as to be capable of transmitting the first partial image to the first device and the second partial image to the second device. That is, the first device and the second device are both independent of the server. The manner of segmenting the second image in the second manner may refer to the manner of segmenting in the first manner, and details thereof are not repeated.
A fourth embodiment of the method of the present application provides an image processing method; as shown in fig. 6, the method includes the following steps:
step 601: determining a combined display parameter of the combined display area;
in this embodiment, the combined display parameter includes a size parameter of at least two sub-display regions constituting the combined display region.
Step 602: determining a first image to be displayed in the combined display area;
step 603: obtaining an image size parameter of the first image;
step 604: and if the image size parameter does not match the size parameter, cropping or filling the first image based on the image size parameter and the size parameter to generate a second image.
Step 604 is a specific implementation of the step "if the first image and the combined display area do not satisfy the matching condition, the first image is processed at least according to the combined display parameter to obtain a second image, and the second image and the combined display area satisfy the matching condition" in the first embodiment of the method.
In this embodiment, the image size parameter is a ratio parameter for a first edge and a second edge of the first image, where the first edge and the second edge are two adjacent edges in the first image. If the first image is a rectangular image, the first side is a long side and the second side is a short side, or the first side is a short side and the second side is a long side.
The combined display parameter may include a size parameter that is a ratio parameter of the first edge and the second edge of the combined display area; taking fig. 5a as an example, this size parameter may be 18:16. The first edge and the second edge are two adjacent edges of the combined display area.
The combined display parameter may also include size parameters that are ratio parameters of the first edge to the second edge of the sub-display areas constituting the combined display area; for example, as shown in fig. 5a, the size parameters may include two ratio parameters, 9:16 and 9:16, where the first edge and the second edge are two adjacent edges of a sub-display area. The ratio parameter of the first and second edges of the combined display area can then be determined from the ratio parameters of the first and second edges of the sub-display areas.
In this embodiment, filling means that a first edge of the first image is made to correspond to the first edge of the combined display area, and image content is filled along a second edge of the first image so that the second edge becomes longer. As shown in fig. 7a, the size parameter of each sub-display area in the combined display area is 9:16, so the size parameter of the combined display area is 27:16. The size parameter of the first image is 27:14, so the short edge of the first image needs to be image-filled so that the size parameter of the second image generated after filling is 27:16. The filling may be performed on one second edge of the first image, as shown in fig. 7b, or on both second edges of the first image, as shown in fig. 7c.
It should be noted that, to prevent the filled portion from looking obtrusive in the second image, the filled content may be determined based on the image content at the second edge of the first image, so that the filled portion transitions smoothly into the original image.
In this embodiment, cropping means that the second edge of the first image is made to correspond to the second edge of the combined display area, and the first edge of the first image is cropped so that it becomes shorter. As also shown in fig. 7a, the size parameter of each sub-display area in the combined display area is 9:16, so the size parameter of the combined display area is 27:16. The size parameter of the first image is 29:16, so the long edge of the first image needs to be cropped so that the size parameter of the second image generated after cropping is 27:16. The cropping may be performed on one first edge of the first image, as shown in fig. 7d, or on both first edges of the first image, as shown in fig. 7e.
For example, the first image shown in fig. 8a is cropped to obtain the second image shown in fig. 8b. On this basis, the second image can further be segmented and then displayed: the second image of fig. 8b is segmented as shown in fig. 8c, and the resulting display effect on the combined display area of fig. 7a is shown in fig. 8d.
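The two operations of step 604 can be sketched as follows, reproducing the 27:14 filling example and the 29:16 cropping example above. The sketch assumes the Pillow library; whether to crop or to fill for a given mismatch is a design choice of the implementer, and the symmetric placement and default fill colour are illustrative assumptions.

from PIL import Image

def fill_to_ratio(img, target_w, target_h):
    """Keep the first (long) edge, pad along the second (short) edge to reach the ratio."""
    new_h = round(img.width * target_h / target_w)
    canvas = Image.new("RGB", (img.width, new_h))
    canvas.paste(img, (0, (new_h - img.height) // 2))
    return canvas

def crop_to_ratio(img, target_w, target_h):
    """Keep the second (short) edge, crop along the first (long) edge to reach the ratio."""
    new_w = round(img.height * target_w / target_h)
    left = (img.width - new_w) // 2
    return img.crop((left, 0, left + new_w, img.height))

second_a = fill_to_ratio(Image.new("RGB", (2700, 1400)), 27, 16)  # 27:14 filled to 27:16
second_b = crop_to_ratio(Image.new("RGB", (2900, 1600)), 27, 16)  # 29:16 cropped to 27:16
assert second_a.size == (2700, 1600) and second_b.size == (2700, 1600)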
A fifth embodiment of the method of the present application provides an image processing method; as shown in fig. 9, the method includes the following steps:
step 901: determining a combined display parameter of the combined display area;
wherein the combined display parameter at least comprises a position parameter and/or a size parameter of at least two sub-display areas constituting the combined display area.
Step 902: determining a first image to be displayed in the combined display area;
step 903: identifying a primary content feature in the first image;
the primary content feature may be referred to as occupying an area in the first image that is larger than the area occupied by other features in the first image, or the primary content feature may be referred to as being capable of being highlighted in the first image, or the primary feature may be one or more character or scene features in the first image, or the like.
Step 904: and if the main content features and the combined display area do not meet the matching condition, processing the first image according to the combined display parameters at least to obtain a second image. Wherein the second image and the combined display region satisfy a matching condition.
Step 904 is a specific implementation of the step in method embodiment one: "if the first image and the combined display area do not satisfy the matching condition, processing the first image at least according to the combined display parameters to obtain a second image, where the second image and the combined display area satisfy the matching condition".
In one case in which the matching condition is not satisfied, if the main content feature in the first image cannot be displayed entirely within one sub-display area of the combined display area, it is determined that the matching condition is not satisfied, and the processing makes the main content feature lie entirely within one sub-display area.
In another such case, if the main content feature in the first image cannot be displayed symmetrically about the combined display position of at least two sub-display areas of the combined display area, it is determined that the matching condition is not satisfied, and the processing makes the main content feature displayed symmetrically about that combined display position.
In yet another such case, if multiple main content features in the first image cannot each be displayed in different sub-display areas of the combined display area, it is determined that the matching condition is not satisfied, and the processing places the multiple main content features in different sub-display areas respectively. A sketch of one such check follows.
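The following sketch tests one of the cases above: whether the horizontal extent of an already-identified main content feature falls inside a single sub-display area or instead crosses a seam between sub-display areas. The bounding-box representation and the left-to-right tiling are assumptions for illustration, not taken from the patent.

def feature_fits_one_sub_area(x0, x1, image_width, sub_widths):
    """True if the feature's horizontal extent [x0, x1] lies inside a single sub-area."""
    total, left = sum(sub_widths), 0.0
    for w in sub_widths:
        right = left + image_width * w / total
        if left <= x0 and x1 <= right:
            return True
        left = right
    return False  # the feature straddles a seam, so the matching condition fails

# A feature spanning x = 800..1100 crosses the seam at x = 900 between the first
# and second of three equal-width sub-areas, so processing is required.
print(feature_fits_one_sub_area(800, 1100, 2700, [9, 9, 9]))  # False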
In this embodiment, in a first manner, when the main content feature and the combined display area do not satisfy a matching condition, the processing the first image according to at least the combined display parameter to obtain a second image includes:
cropping the first image based on at least the combined display parameter, generating a third image; and filling the third image to generate a second image.
And the cropping part of the first image and the filling part of the third image are different parts so as to ensure that the main content features in the second image and the combined display area meet the matching condition. That is, in order for the main content feature and the combined display region to satisfy the matching condition, the second image may be generated by cropping and then filling the first image.
Consider the combined display area shown in fig. 10a and the first image shown in fig. 10b, a sunrise-at-sea image whose main content feature is the sun and its reflection at the sea surface. If the first image is cropped based on the size parameter alone, without considering its main content feature, the second image shown in fig. 10c is generated, and when that second image is segmented as shown in fig. 10d, the sun is split apart.
In the first manner, the cropping shown in fig. 10b and fig. 10e is used instead: the black portion in fig. 10e is the cropped portion, and the blank portion in fig. 10e is then filled to generate the second image. The second image is segmented as shown in fig. 10f, and the resulting display effect on the combined display area of fig. 10a is shown in fig. 10g, with the main content feature displayed symmetrically about the combined display position of the two sub-display areas of the combined display area.
Similarly, consider the combined display area shown in fig. 11a and the first image shown in fig. 11b, an image of an animal running by the sea whose main content feature is the running animal. If the first image is cropped based on the size parameter alone, without considering its main content feature, the second image shown in fig. 11c is generated, and segmenting that second image splits the animal apart, as shown in fig. 11d.
In the first manner, the cropping shown in fig. 11b and fig. 11e is used instead: the black portion in fig. 11e is the cropped portion, i.e. a portion is cropped from the image of fig. 11b, and the blank portion in fig. 11e is filled to generate the second image. The display effect of the second image on the combined display area of fig. 11a is then as shown in fig. 11f, with the main content feature displayed entirely within one sub-display area.
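A minimal sketch of this crop-then-fill manner is given below: a strip is cropped from one side and a strip of the same width is filled on the opposite side, which keeps the overall size but shifts the image content sideways, as in fig. 10e and fig. 11e. It assumes the Pillow library; the strip width, the shift direction and the default fill colour are illustrative.

from PIL import Image

def shift_content(first_image, shift_px):
    """Crop `shift_px` columns from the right edge (the cropped portion),
    then fill `shift_px` columns on the left edge (the filled portion)."""
    w, h = first_image.size
    kept = first_image.crop((0, 0, w - shift_px, h))
    second = Image.new("RGB", (w, h))  # the new left strip remains as fill colour
    second.paste(kept, (shift_px, 0))
    return second

# Shifting the content sideways can move a feature off a seam (fig. 11e)
# or centre it on the seam so it is displayed symmetrically (fig. 10e).
second_image = shift_content(Image.new("RGB", (2700, 1600)), 150)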
A second mode, in which the processing the first image to obtain a second image according to at least the combined display parameter includes: magnifying the first image based on at least the combined display parameter to generate a third image; and cutting the third image to generate a second image.
Enlarging the first image makes the main content feature in the first image larger, yielding a third image; cropping the third image means cropping along its first edge and/or second edge, so that the main content feature in the generated second image and the combined display area satisfy the matching condition.
In a third mode, the processing the first image to obtain a second image according to at least the combined display parameter includes: reducing the first image based on at least the combined display parameter to generate a third image; and filling the third image to generate a second image.
Reducing the first image makes the main content feature in the first image smaller, yielding a third image; filling the third image means filling image content along its first edge and/or second edge, so that the main content feature in the generated second image and the combined display area satisfy the matching condition.
A fourth mode, wherein the processing the first image to obtain the second image according to at least the combined display parameter includes: magnifying the first image based on at least the combined display parameter to generate a third image; cropping the third image to generate a fourth image; and filling the fourth image to generate a second image.
Enlarging the first image makes the main content feature in the first image larger, yielding a third image; cropping the third image means cropping along its first edge and/or second edge to generate a fourth image; and filling the fourth image means filling image content along its first edge and/or second edge, where the cropped portion of the third image and the filled portion of the fourth image are different portions, so that the main content feature in the generated second image and the combined display area satisfy the matching condition.
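The fourth manner can be sketched as follows: the first image is enlarged into a third image, one edge of the third image is cropped into a fourth image, and a different edge of the fourth image is filled to produce the second image. The Pillow library, the scale factor and the particular edges chosen are illustrative assumptions.

from PIL import Image

def enlarge_crop_fill(first_image, scale, crop_px, fill_px):
    """Fourth manner: enlarge -> third image, crop one edge -> fourth image,
    fill a different edge -> second image (cropped and filled portions differ)."""
    w, h = first_image.size
    third = first_image.resize((round(w * scale), round(h * scale)))
    fourth = third.crop((0, 0, third.width - crop_px, third.height))   # crop right edge
    second = Image.new("RGB", (fourth.width + fill_px, fourth.height))
    second.paste(fourth, (fill_px, 0))                                 # fill left edge
    return second

second_image = enlarge_crop_fill(Image.new("RGB", (2700, 1600)), 1.2, 400, 160)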
A sixth embodiment of the method of the present application provides an image processing method; as shown in fig. 12, the method includes the following steps:
step 1201: determining a combined display parameter of the combined display area;
wherein the combined display parameter comprises a size parameter of at least two sub-display areas constituting the combined display area;
step 1202: determining a first image to be displayed in the combined display area;
step 1203: judging whether the image size parameter and the size parameter meet a matching condition, if not, entering a step 1204:
the image size parameter is a ratio parameter of a first edge and a second edge of the first image, wherein the first edge and the second edge are two adjacent edges in the first image. The combined display parameter may comprise a size parameter which may be a scale parameter of the first side to the second side of the combined display area. And judging whether the image size parameter and the size parameter meet a matching condition, specifically, judging whether the image size parameter and the size parameter are consistent. Further, if not, determining that the matching condition is not met.
If the image size parameter and the size parameter satisfy the matching condition, the subsequent steps need not be performed. In another embodiment, the first image may then simply be controlled to be displayed in the combined display area, in the same manner as the second image is displayed in method embodiment two, which is not repeated here.
Step 1204: identifying a primary content feature in the first image;
in the case that the matching condition is not satisfied, step 1204 may be triggered to identify the primary content feature in the first image in this embodiment. In another embodiment of the present application, step 1204 may be executed after step 1202, and then step 1203 is executed, otherwise, step 1205 is directly executed.
The main content feature may be a feature that occupies a larger area in the first image than other features, a feature that is prominent in the first image, or one or more person or scene features in the first image, and so on.
Step 1205: judging whether the main content features and the combined display area meet matching conditions, if not, entering a step 1206;
in one mode of not meeting the matching condition, if the main content features in the first image cannot be all displayed in one sub-display area in the combined display area, determining that the matching condition is not met, and processing the main content features to enable the main content features to be all located in one sub-display area;
or, in another mode that the matching condition is not satisfied, if the main content feature in the first image cannot be displayed symmetrically in the combined display position of at least two sub-display areas in the combined display area, it is determined that the matching condition is not satisfied, and the main content feature can be displayed symmetrically according to the combined display position of at least two sub-display areas in the combined display area through processing.
Alternatively, in another mode in which the matching condition is not satisfied, if the plurality of main content features in the first image cannot be respectively displayed in different sub-display areas in the combined display area, it is determined that the matching condition is not satisfied, and the plurality of main content features can be respectively located in different sub-display areas by the processing.
Step 1206: and processing the first image according to the combined display parameter to obtain a second image.
The second image obtained by processing the first image has an image size parameter that satisfies the matching condition with the size parameter of the combined display area, and a main content feature that satisfies the matching condition with the combined display area. Specifically, the image size parameter of the second image satisfying the matching condition with the size parameter of the combined display area may mean that the ratio parameter of the first edge of the second image to its adjacent second edge is consistent with the ratio parameter of the first edge of the combined display area to its adjacent second edge.
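The decision flow of fig. 12 can be sketched as the following gate, which combines the aspect-ratio comparison of step 1203 with a seam-crossing test standing in for the main-content check of step 1205; the function name, the tolerance and the seam representation are illustrative assumptions, and the other forms of the feature matching condition described above are equally possible.

def needs_processing(image_ratio, area_ratio, feature_box, seams, tol=1e-3):
    """Gate of steps 1203 and 1205: a second image is generated only when the
    size parameters mismatch AND the main content feature crosses a seam
    between sub-display areas (illustrative variant of the matching condition)."""
    if abs(image_ratio - area_ratio) < tol:
        return False  # step 1203 satisfied: the first image can be displayed directly
    x0, x1 = feature_box
    return any(x0 < s < x1 for s in seams)  # step 1205

# A 27:14 image on a 27:16 combined area whose single seam at x = 1350 cuts
# through a feature spanning x = 1200..1500, so step 1206 must run.
print(needs_processing(27 / 14, 27 / 16, (1200, 1500), [1350]))  # True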
Corresponding to the image processing method, the embodiment of the device of the application also provides a processing device.
Specifically, as shown in fig. 13, a processing apparatus provided in a first embodiment of the apparatus of the present application includes: a memory 1310, and a processor 1320.
A memory 1310 for storing a program.
A processor 1320, running the program, for determining a combined display parameter of a combined display area, determining a first image to be displayed in the combined display area, and processing the first image to obtain a second image at least according to the combined display parameter if the first image and the combined display area do not satisfy a matching condition.
Wherein the combined display parameters at least comprise position parameters and/or size parameters of at least two sub-display areas constituting the combined display area; the second image and the combined display area satisfy a matching condition.
The different sub-display regions may be different display regions of one display unit, that is, the combined display region may include a combined display region formed by at least two sub-display regions of one display unit. Alternatively, one sub display region may be a display region of one display unit, that is, the combined display region may include a combined display region composed of at least two display units.
The position parameter is used for representing the arrangement position relation of at least two sub-display areas forming the combined display area. The size parameter is used to characterize the size of the combined display area and/or the size of each of at least two sub-display areas that make up the combined display area. The size parameter may be a ratio parameter of two sides in the combined display area or the sub-display area, or may be an area parameter of the combined display area or the sub-display area.
It can be seen that, in the present embodiment, the combined display parameters of the combined display area are determined, where the combined display parameters include at least the position parameters and/or the size parameters of the at least two sub-display areas constituting the combined display area, and the first image to be displayed in the combined display area is determined. If the first image and the combined display area do not satisfy the matching condition, the first image is processed at least according to the combined display parameters to obtain a second image that does satisfy the matching condition. That is, in the present application the combined display area includes at least two sub-display areas, and when the first image does not match the combined display area, the first image is processed into a second image that matches it, so that the combined display area can present a better display effect.
A second embodiment of the apparatus of the present application provides a processing device; in this embodiment, the processing device is a first device, and as shown in fig. 14, the first device includes:
a memory 1410, a processor 1420, a first display unit 1430, and a communication unit 1440; wherein:
a memory 1410 for storing programs;
a processor 1420, which executes the program, for determining a combined display parameter of a combined display area, determining a first image to be displayed in the combined display area, and processing the first image to obtain a second image at least according to the combined display parameter if the first image and the combined display area do not satisfy a matching condition.
Wherein the combined display parameters comprise at least position parameters and/or size parameters of at least two sub-display areas constituting the combined display area; the second image and the combined display area satisfy a matching condition.
The communication unit 1440 is used for communicating with a second device having a second display unit as an extended display unit of the first device.
The processor 1420 is further configured to control the first image to be displayed in a combined display area configured by the first display unit 1430 and the second display unit.
The first display unit and the second display unit thus serve as at least two sub-display areas constituting the combined display area, and the processor 1420 can control the second image to be displayed in the first display unit and the second display unit, where "displayed" means that the first display unit and the second display unit jointly display a single second image. In this embodiment, there may be one or more second display units.
Therefore, in the embodiment, the combined display area is composed of at least the first display unit and the second display unit, and in the case that the first image does not match with the combined display area, the first image is processed to obtain the second image, so that the second image matches with the combined display area, and the combined display area can present a better display effect.
A third embodiment of the apparatus of the present application provides a processing device; as shown in fig. 15, the processing device includes: a memory 1510, a processor 1520, and a communication unit 1530.
A memory 1510 for storing a program;
a processor 1520 running the program for determining a combined display parameter of a combined display area, determining a first image to be displayed in the combined display area, and processing the first image to obtain a second image at least according to the combined display parameter if the first image and the combined display area do not satisfy a matching condition.
Wherein the combined display parameters comprise at least position parameters and/or size parameters of at least two sub-display areas constituting the combined display area; the second image and the combined display area satisfy a matching condition.
The processor 1520 is further configured to segment the second image to generate at least a first partial image and a second partial image, and transmit the second partial image to a second device via at least the communication unit 1530.
If the processing device is a first device, the processor 1520 is specifically configured to segment the second image to generate at least a first partial image and a second partial image, control the first display unit of the first device to display the first partial image, and send the second partial image to the second device through at least the communication unit 1530, so that the second display unit of the second device displays the second partial image.
Wherein the combined display area is constituted by at least the first display unit and the second display unit.
That is, the processing device may be a first device and may be capable of communicating with a second device, thereby being capable of transmitting at least the second partial image to the second device, the second device being independent of the first device.
If the processing device is a server, the processor 1520 is specifically configured to segment the second image to generate at least a first partial image and a second partial image, transmit the first partial image to a first device at least through the communication unit 1530, so that a first display unit of the first device displays the first partial image, and transmit the second partial image to a second device at least through the communication unit 1530, so that a second display unit of the second device displays the second partial image; wherein the combined display area is constituted by at least the first display unit and the second display unit.
That is, the processing device is a server, and both the first device and the second device are independent of the server.
In a fourth embodiment of the apparatus of the present application, the combined display parameter includes a size parameter. Correspondingly, the processor is further configured to obtain an image size parameter of the first image. When the first image and the combined display area do not satisfy the matching condition, the processor processing the first image at least according to the combined display parameter to obtain a second image includes: if the image size parameter does not match the size parameter, cropping or filling the first image based on the image size parameter and the size parameter to generate the second image.
The image size parameter is a ratio parameter of a first edge and a second edge of the first image, wherein the first edge and the second edge are two adjacent edges in the first image.
The size parameter included in the combined display parameter may be a ratio parameter of a first edge and a second edge of the combined display area, wherein the first edge and the second edge are two adjacent edges of the combined display area. The size parameter included in the combined display parameter may also be a ratio parameter of a first edge and a second edge of a sub-display area constituting the combined display area, wherein the first edge and the second edge are two adjacent edges of the sub-display area. The ratio parameter of the first edge and the second edge of the combined display area can be determined based on the ratio parameter of the first edge and the second edge of the sub-display areas.
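Purely as a sketch, assuming sub-display areas of equal size arranged side by side along their first edge, the ratio parameters and a corresponding matching condition could be computed as follows; the function names and the tolerance value are illustrative assumptions rather than part of the embodiments.

```python
def edge_ratio(first_edge: float, second_edge: float) -> float:
    """Ratio parameter of two adjacent edges, e.g. width / height."""
    return first_edge / second_edge

def combined_area_ratio(sub_width: float, sub_height: float, sub_count: int = 2) -> float:
    """Ratio parameter of a combined display area built from sub_count equal
    sub-display areas placed side by side (an assumed layout)."""
    return edge_ratio(sub_width * sub_count, sub_height)

def size_parameters_match(image_ratio: float, area_ratio: float, tolerance: float = 0.01) -> bool:
    """One possible matching condition on the size parameters."""
    return abs(image_ratio - area_ratio) <= tolerance
```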
In this embodiment, the filling is used for representing that the first edge of the first image corresponds to the first edge of the combined display area, and image filling is performed at the second edge of the first image so that the second edge of the first image is lengthened.
It should be noted that, in order to avoid the filled image portion appearing obtrusive in the second image, the filled image may be determined based on the image content at the second edge of the first image, so as to achieve a natural image transition effect.
In this embodiment, the cropping is used for representing that the second edge of the first image corresponds to the second edge of the combined display area, and image cropping is performed at the first edge of the first image so that the first edge of the first image is shortened.
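A minimal sketch of this cropping-or-filling step, assuming the first edge is the width, the second edge is the height, an RGB image, and a solid color standing in for content-derived filling, might be:

```python
from PIL import Image

def fit_image_to_ratio(first_image: Image.Image, target_ratio: float,
                       fill_color=(0, 0, 0)) -> Image.Image:
    """Crop or fill the first image so that its width/height ratio equals the
    ratio parameter of the combined display area (one possible reading of the
    embodiment; the edge labelling here is an assumption)."""
    width, height = first_image.size
    new_width = int(height * target_ratio)
    if width > new_width:
        # Image is too wide for the area: crop symmetrically along the width.
        left = (width - new_width) // 2
        return first_image.crop((left, 0, left + new_width, height))
    if width < new_width:
        # Image is too narrow for the area: fill along the width on both sides.
        canvas = Image.new(first_image.mode, (new_width, height), fill_color)
        canvas.paste(first_image, ((new_width - width) // 2, 0))
        return canvas
    return first_image
```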
In a fifth embodiment of the device of the present application, the processor is further configured to identify a main content feature in the first image. The main content feature may be a feature that occupies a larger area in the first image than other features, or a feature that can be highlighted in the first image, or one or more character or scene features in the first image, and so on.
Correspondingly, when the first image and the combined display area do not meet the matching condition, the processor processes the first image according to at least the combined display parameter to obtain a second image, including: and if the main content features and the combined display area do not meet the matching condition, processing the first image according to the combined display parameters at least to obtain a second image.
In one mode of not meeting the matching condition, if the main content features in the first image cannot be all displayed in one sub-display area in the combined display area, determining that the matching condition is not met, and processing the main content features to enable the main content features to be all located in one sub-display area;
or, in another mode that the matching condition is not satisfied, if the main content feature in the first image cannot be displayed symmetrically in the combined display position of at least two sub-display areas in the combined display area, it is determined that the matching condition is not satisfied, and the main content feature can be displayed symmetrically according to the combined display position of at least two sub-display areas in the combined display area through processing.
Alternatively, in another mode in which the matching condition is not satisfied, if the plurality of main content features in the first image cannot be respectively displayed in different sub-display areas in the combined display area, it is determined that the matching condition is not satisfied, and the plurality of main content features can be respectively located in different sub-display areas by the processing.
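These three non-matching cases can be sketched with bounding boxes; the box representation and the horizontal two-screen layout below are assumptions made only for illustration.

```python
def feature_in_one_sub_area(feature_box, sub_areas):
    """True if a main content feature (x0, y0, x1, y1) lies entirely inside
    at least one sub-display area, each also given as (x0, y0, x1, y1)."""
    fx0, fy0, fx1, fy1 = feature_box
    return any(fx0 >= x0 and fy0 >= y0 and fx1 <= x1 and fy1 <= y1
               for (x0, y0, x1, y1) in sub_areas)

def feature_symmetric_about_seam(feature_box, seam_x, tolerance=5):
    """True if a feature is centred on the seam between two horizontally
    adjacent sub-display areas, i.e. displayed symmetrically across them."""
    fx0, _, fx1, _ = feature_box
    return abs((fx0 + fx1) / 2 - seam_x) <= tolerance

def features_in_distinct_sub_areas(feature_boxes, sub_areas):
    """True if each main content feature can be assigned to a different
    sub-display area (a simple greedy assignment, for illustration only)."""
    used = set()
    for box in feature_boxes:
        hits = [i for i, area in enumerate(sub_areas)
                if i not in used and feature_in_one_sub_area(box, [area])]
        if not hits:
            return False
        used.add(hits[0])
    return True
```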
In this embodiment, in a first manner, when the main content feature and the combined display area do not satisfy a matching condition, the processing the first image according to at least the combined display parameter to obtain a second image includes:
cropping the first image based on at least the combined display parameter, generating a third image; and filling the third image to generate a second image.
The cropped portion of the first image and the filled portion of the third image are different portions, so as to ensure that the main content feature in the second image and the combined display area meet the matching condition. That is, in order for the main content feature and the combined display area to satisfy the matching condition, the second image may be generated by cropping and then filling the first image.
A second mode, in which the processing the first image to obtain a second image according to at least the combined display parameter includes: magnifying the first image based on at least the combined display parameter to generate a third image; and cutting the third image to generate a second image.
Enlarging the first image can make the main content feature in the first image larger so as to obtain a third image, and cropping the third image can be used for representing image cropping of the first edge and/or the second edge of the third image, so that the main content feature in the generated second image and the combined display area meet the matching condition.
In a third mode, the processing the first image to obtain a second image according to at least the combined display parameter includes: reducing the first image based on at least the combined display parameter to generate a third image; and filling the third image to generate a second image.
The reduction of the first image can make the main content features in the first image smaller so as to obtain a third image, and the filling of the third image can be used for representing the image filling of the first edge and/or the second edge of the third image, so that the generated main content features in the second image and the combined display area meet the matching condition.
A fourth mode, wherein the processing the first image to obtain the second image according to at least the combined display parameter includes: magnifying the first image based on at least the combined display parameter to generate a third image; cropping the third image to generate a fourth image; and filling the fourth image to generate a second image.
The first image is enlarged so that the main content feature in the first image becomes larger, to obtain a third image. Cropping the third image can be used for representing image cropping of the first edge and/or the second edge of the third image, so as to generate a fourth image. Filling the fourth image can be used for representing image filling of the first edge and/or the second edge of the fourth image, wherein the cropped portion of the third image and the filled portion of the fourth image are different portions, so that the main content feature in the generated second image and the combined display area meet the matching condition.
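The four modes above can be sketched as simple Pillow operations; the shift amounts, scale factors, centre cropping, solid-color filling, and the RGB assumption are illustrative choices rather than the claimed processing.

```python
from PIL import Image

def crop_then_fill(img: Image.Image, shift: int, fill=(0, 0, 0)) -> Image.Image:
    """First mode: crop one part and fill a different part by the same amount,
    shifting the main content feature without changing the overall size."""
    w, h = img.size
    third = img.crop((shift, 0, w, h))            # cropped portion removed on the left
    second = Image.new(img.mode, (w, h), fill)
    second.paste(third, (0, 0))                   # filled portion appears on the right
    return second

def enlarge_then_crop(img: Image.Image, scale: float) -> Image.Image:
    """Second mode: enlarge (scale >= 1) so the main content feature becomes
    larger, then centre-crop back to the original size."""
    w, h = img.size
    third = img.resize((int(w * scale), int(h * scale)))
    left, top = (third.width - w) // 2, (third.height - h) // 2
    return third.crop((left, top, left + w, top + h))

def reduce_then_fill(img: Image.Image, scale: float, fill=(0, 0, 0)) -> Image.Image:
    """Third mode: reduce (scale <= 1) so the main content feature becomes
    smaller, then fill back out to the original size."""
    w, h = img.size
    third = img.resize((int(w * scale), int(h * scale)))
    second = Image.new(img.mode, (w, h), fill)
    second.paste(third, ((w - third.width) // 2, (h - third.height) // 2))
    return second

def enlarge_crop_fill(img: Image.Image, scale: float, shift: int, fill=(0, 0, 0)) -> Image.Image:
    """Fourth mode: enlarge, crop, then fill a different part."""
    return crop_then_fill(enlarge_then_crop(img, scale), shift, fill)
```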
In a sixth embodiment of the apparatus of the present application, a processing device comprises a memory and a processor;
the memory is used for storing programs;
The processor runs the program for determining a combined display parameter of the combined display area and determining a first image to be displayed in the combined display area.
The processor is further used for judging whether the image size parameter and the size parameter meet a matching condition; if not, identifying the main content features in the first image; and if the main content features and the combined display area do not meet the matching condition, processing the first image according to the combined display parameters at least to obtain a second image.
The image size parameter of the second image and the size parameter satisfy the matching condition, and the main content feature of the second image and the combined display area satisfy the matching condition.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An image processing method comprising:
determining a combined display parameter of the combined display area; wherein the combined display parameters comprise at least position parameters and/or size parameters of at least two sub-display areas constituting the combined display area;
determining a first image to be displayed in the combined display area;
and if the first image and the combined display area do not meet the matching condition, processing the first image at least according to the combined display parameter to obtain a second image, wherein the second image and the combined display area meet the matching condition.
2. The method of claim 1, further comprising:
controlling the second image to be displayed in a combined display area formed by a display unit of the first device and an extended display unit, wherein the extended display unit is positioned on a second device connected with the first device;
or, the second image is divided to generate at least a first partial image and a second partial image, and at least the second partial image is transmitted to a second device;
wherein dividing the second image to generate at least a first partial image and a second partial image, and transmitting at least the second partial image to a second device comprises:
dividing the second image to generate at least a first partial image and a second partial image, controlling a first display unit of a first device to display the first partial image, and transmitting at least the second partial image to a second device so that a second display unit of the second device displays the second partial image; wherein the combined display area is constituted by at least the first display unit and the second display unit;
or, the second image is divided to generate at least a first partial image and a second partial image, at least the first partial image is transmitted to a first device to cause a first display unit of the first device to display the first partial image, and at least the second partial image is transmitted to a second device to cause a second display unit of the second device to display the second partial image; wherein the combined display area is constituted by at least the first display unit and the second display unit.
3. The method of claim 1, the combined display parameter comprising a size parameter; the method further comprises the following steps:
obtaining an image size parameter of the first image;
correspondingly, if the first image and the combined display area do not meet the matching condition, processing the first image according to the combined display parameter at least to obtain a second image comprises:
if the image size parameter does not match the size parameter, cropping or filling the first image based on the image size parameter and the size parameter to generate a second image;
the image size parameter is a proportion parameter of a first edge and a second edge of the first image, and the size parameter is a proportion parameter of the first edge and the second edge of the combined display area;
the filling is used for representing that a first edge of the first image corresponds to a first edge of the combined display area and filling a second edge of the first image with images;
and the cropping is used for representing that the second edge of the first image corresponds to the second edge of the combined display area, and the first edge of the first image is subjected to image cropping.
4. The method of claim 1, further comprising:
identifying a primary content feature in the first image;
correspondingly, if the first image and the combined display area do not meet the matching condition, processing the first image according to the combined display parameter at least to obtain a second image comprises: and if the main content features and the combined display area do not meet the matching condition, processing the first image according to the combined display parameters at least to obtain a second image.
5. The method of claim 4, wherein
the processing the first image to obtain a second image at least according to the combined display parameter comprises:
magnifying the first image based on at least the combined display parameter to generate a third image;
clipping the third image to generate a second image;
or, the processing the first image to obtain a second image at least according to the combined display parameter includes:
cropping the first image based on at least the combined display parameter, generating a third image;
filling the third image to generate a second image; wherein the cropped portion of the first image is a different portion than the padded portion of the third image;
or, the processing the first image to obtain a second image at least according to the combined display parameter includes:
reducing the first image based on at least the combined display parameter to generate a third image;
filling the third image to generate a second image;
or, the processing the first image to obtain a second image at least according to the combined display parameter includes:
magnifying the first image based on at least the combined display parameter to generate a third image;
cropping the third image to generate a fourth image;
filling the fourth image to generate a second image; wherein the cropped portion of the third image is a different portion than the filled portion of the fourth image.
6. The method of claim 4, wherein the processing causes the main content features to be located entirely in one sub-display region, or wherein the main content features are displayed symmetrically depending on a combined display position of at least two sub-display regions in the combined display region.
7. The method of claim 4, the combined display parameter comprising a size parameter, the method further comprising:
obtaining an image size parameter of the first image;
judging whether the image size parameter and the size parameter meet a matching condition;
if not, judging whether the main content features and the combined display area meet matching conditions;
correspondingly, if the first image and the combined display area do not meet the matching condition, processing the first image according to the combined display parameter at least to obtain a second image comprises:
if the image size parameter and the size parameter do not satisfy the matching condition, and the main content feature and the combined display area do not satisfy the matching condition, the first image is processed at least according to the combined display parameter to obtain a second image, wherein the image size parameter of the second image and the size parameter satisfy the matching condition, and the main content feature of the second image and the combined display area satisfy the matching condition.
8. A processing device, comprising:
a memory for storing a program;
a processor running the program for determining a combined display parameter of a combined display area, determining a first image to be displayed in the combined display area, and processing the first image according to the combined display parameter at least to obtain a second image if the first image and the combined display area do not satisfy a matching condition; wherein the combined display parameters comprise at least position parameters and/or size parameters of at least two sub-display areas constituting the combined display area; the second image and the combined display area satisfy a matching condition.
9. The processing device of claim 8, the processing device being a first device, the first device further comprising:
a first display unit and a communication unit;
the communication unit is used for communicating with a second device, the second device is provided with a second display unit, and the second display unit is used as an expansion display unit of the first device;
the processor is further configured to control the first image to be displayed in a combined display area formed by the first display unit and the second display unit.
10. The processing device of claim 8, further comprising:
a communication unit;
the processor is further configured to segment the second image to generate at least the first partial image and the second partial image, and transmit at least the second partial image to a second device via the communication unit;
wherein,
if the processing device is a first device, the processor is specifically configured to segment the second image to generate at least a first partial image and a second partial image, control a first display unit of the first device to display the first partial image, and send the second partial image to a second device at least through the communication unit, so that a second display unit of the second device displays the second partial image; wherein the combined display area is constituted by at least the first display unit and the second display unit;
if the processing device is a server, the processor is specifically configured to segment the second image to generate at least a first partial image and a second partial image, transmit the first partial image to a first device at least through the communication unit so as to cause a first display unit of the first device to display the first partial image, and transmit the second partial image to a second device at least through the communication unit so as to cause a second display unit of the second device to display the second partial image; wherein the combined display area is constituted by at least the first display unit and the second display unit.
CN202010245500.7A 2020-03-31 2020-03-31 Image processing method and processing equipment Active CN111372009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010245500.7A CN111372009B (en) 2020-03-31 2020-03-31 Image processing method and processing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010245500.7A CN111372009B (en) 2020-03-31 2020-03-31 Image processing method and processing equipment

Publications (2)

Publication Number Publication Date
CN111372009A true CN111372009A (en) 2020-07-03
CN111372009B CN111372009B (en) 2021-09-14

Family

ID=71212153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010245500.7A Active CN111372009B (en) 2020-03-31 2020-03-31 Image processing method and processing equipment

Country Status (1)

Country Link
CN (1) CN111372009B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1719398A (en) * 2004-07-08 2006-01-11 腾讯科技(深圳)有限公司 Image partitioning display method and device
CN101644998A (en) * 2008-08-08 2010-02-10 深圳华强三洋技术设计有限公司 Multiple image display device and image display device
US10573348B1 (en) * 2013-12-22 2020-02-25 Jasmin Cosic Methods, systems and apparatuses for multi-directional still pictures and/or multi-directional motion pictures
CN103985373A (en) * 2014-05-07 2014-08-13 青岛海信电器股份有限公司 Image processing method and device applied to tiled display equipment
US20200033995A1 (en) * 2018-07-26 2020-01-30 At&T Intellectual Property I, L.P. Surface Interface
CN109308174A (en) * 2018-10-10 2019-02-05 烟台职业学院 Across screen picture splicing control method
CN109729336A (en) * 2018-12-11 2019-05-07 维沃移动通信有限公司 A kind of display methods and device of video image

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022111585A1 (en) * 2020-11-26 2022-06-02 华为技术有限公司 Image picture self-adaptive cropping method and electronic device

Also Published As

Publication number Publication date
CN111372009B (en) 2021-09-14

Similar Documents

Publication Publication Date Title
US20090002368A1 (en) Method, apparatus and a computer program product for utilizing a graphical processing unit to provide depth information for autostereoscopic display
CN112114928B (en) Processing method and device for display page
US8952989B2 (en) Viewer unit, server unit, display control method, digital comic editing method and non-transitory computer-readable medium
CN113126862B (en) Screen capture method and device, electronic equipment and readable storage medium
CN111552530A (en) Terminal screen adapting method, device and equipment for user interface
CN112055244B (en) Image acquisition method and device, server and electronic equipment
CN110580678A (en) image processing method and device
CN113015007B (en) Video frame inserting method and device and electronic equipment
CN113570626B (en) Image cropping method and device, computer equipment and storage medium
CN109272526B (en) Image processing method and system and electronic equipment
CN111372009B (en) Image processing method and processing equipment
CN105389308B (en) Webpage display processing method and device
CN111913343B (en) Panoramic image display method and device
CN107612881B (en) Method, device, terminal and storage medium for transmitting picture during file transmission
JP2011192008A (en) Image processing system and image processing method
CN113949900B (en) Live broadcast mapping processing method, system, equipment and storage medium
CN110941413B (en) Display screen generation method and related device
EP4020159A1 (en) Image processing method and apparatus, and content sharing method and device
CN110223367B (en) Animation display method, device, terminal and storage medium
JP3991061B1 (en) Image processing system
CN113126942A (en) Display method and device of cover picture, electronic equipment and storage medium
CN110519530A (en) Hardware based picture-in-picture display methods and device
CN111158844B (en) Schedule display method and related equipment
CN110473146B (en) Remote sensing image display method and device, storage medium and computer equipment
CN116610244A (en) Thumbnail display control method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant