CN110738598A - Image adaptation method, electronic device and storage medium - Google Patents
- Publication number
- CN110738598A CN110738598A CN201910801673.XA CN201910801673A CN110738598A CN 110738598 A CN110738598 A CN 110738598A CN 201910801673 A CN201910801673 A CN 201910801673A CN 110738598 A CN110738598 A CN 110738598A
- Authority
- CN
- China
- Prior art keywords
- image
- area
- stretching
- region
- screen
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
Abstract
Embodiments of the present invention relate to the field of image processing, and in particular to an image adaptation method, an electronic device, and a storage medium.
Description
Technical Field
Embodiments of the present invention relate to the field of image processing, and in particular to an image adaptation method, an electronic device, and a storage medium.
Background
However, the inventor has found that in the related art, when a user displays an image full screen on an electronic device, the image is usually scaled uniformly in the horizontal and vertical directions to fill the whole screen. Uniform scaling, however, easily stretches and deforms the image, so the user's viewing experience is poor.
Disclosure of Invention
An object of embodiments of the present invention is to provide an image adaptation method, an electronic device, and a storage medium, which can display an image full screen while ensuring that a partial area of the image is not deformed, so as to provide a better image viewing experience.
To solve the above technical problem, the image adaptation method includes: scaling an image in equal proportion according to the size of the image and the size of a screen to be adapted; acquiring a stretching area in the scaled image according to a key area in the image; and stretching the stretching area according to the size of the image and the size of the screen to be adapted.
Embodiments of the present invention also provide an electronic device, which includes at least one processor and a memory in communication connection with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to execute the above image adaptation method.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program that, when executed by a processor, implements the image adaptation method described above.
Compared with the prior art, embodiments of the present invention first scale the image in equal proportion according to the size of the image and the size of the screen to be adapted; then acquire a stretching area in the scaled image according to the key area in the image; and finally stretch the stretching area according to the size of the image and the size of the screen to be adapted. Because the image is scaled in equal proportion, the content of the image is not distorted after scaling. The stretching area is then obtained from the key area and stretched, so that the stretched image can be displayed full screen on the screen to be adapted without affecting the viewing of the image. Since only the stretching area is stretched after the equal-proportion scaling, the key area itself is never stretched: its content is not distorted, and the proportionally scaled key area remains complete and normally displayed on the screen to be adapted. In addition, because the key area of interest may differ from case to case, the method fully accounts for such differences and ensures that, even when the image size changes to adapt to the screen size, the key area is not stretched or deformed, providing a better visual experience.
In addition, acquiring the stretching area in the scaled image according to the key area in the image includes: acquiring the minimum abscissa c_min, the maximum abscissa c_max, the minimum ordinate r_min, and the maximum ordinate r_max of the key area in the scaled image; obtaining the aspect ratio of the image and the aspect ratio of the screen to be adapted; if the aspect ratio of the screen to be adapted is greater than or equal to the aspect ratio of the image, taking the areas of the scaled image whose abscissa is less than c_min or greater than c_max as the stretching area; and if the aspect ratio of the screen to be adapted is less than that of the image, taking the areas whose ordinate is less than r_min or greater than r_max as the stretching area. This provides a way of obtaining the stretching area in which the area to be stretched is defined by the minimum and maximum horizontal and vertical coordinates of the key area, rather than treating every area outside the key area as an area to be stretched. When the aspect ratio of the screen to be adapted is greater than or equal to that of the image, the proportionally scaled image does not fit the screen width, so the stretching area is defined horizontally by the abscissas of the key area; when the aspect ratio of the screen is less than that of the image, the scaled image does not fit the screen height, so the stretching area is defined vertically by the ordinates of the key area. Dividing the stretching area in this way ensures that the key area is displayed completely and normally in the image, giving a better visual experience.
In addition, stretching the stretching area according to the size of the image and the size of the screen to be adapted includes: if the aspect ratio of the screen to be adapted is greater than or equal to the aspect ratio of the image, horizontally stretching the stretching area according to a first stretching coefficient λ, wherein λ is calculated by the following formula:
wherein w' represents the width of the screen to be adapted, w represents the width of the image, h' represents the height of the screen to be adapted, and h represents the height of the image; and if the aspect ratio of the screen to be adapted is less than that of the image, vertically stretching the stretching area according to a second stretching coefficient δ, wherein δ is calculated by the following formula:
the above provides ways to stretch the stretching area, that is, the stretching coefficient is calculated by the size of the image and the size of the screen to be adapted in both the horizontal direction and the vertical direction, and the stretching area is stretched according to the stretching coefficient, so that the stretched image can be displayed on the screen to be adapted in a full screen mode.
Further, when the stretching area is composed of a first sub-region and a second sub-region, horizontally stretching the stretching area according to the first stretching coefficient λ includes: horizontally stretching the first sub-region according to a horizontal stretching coefficient λ_1 of the first sub-region, and horizontally stretching the second sub-region according to a horizontal stretching coefficient λ_2 of the second sub-region, wherein λ_1 and λ_2 are calculated by the following formulas:
wherein w_1 represents the width of the first sub-region and w_2 represents the width of the second sub-region; and vertically stretching the stretching area according to the second stretching coefficient δ includes: vertically stretching the first sub-region according to a vertical stretching coefficient δ_1 of the first sub-region, and vertically stretching the second sub-region according to a vertical stretching coefficient δ_2 of the second sub-region, wherein δ_1 and δ_2 are calculated by the following formulas:
wherein h_1 represents the height of the first sub-region and h_2 represents the height of the second sub-region. The above provides another way of stretching the stretching area: when the stretching area can be further divided into sub-regions, the sub-regions are stretched by different stretching coefficients calculated by the above formulas. Stretching the sub-regions separately keeps the deformation of the sub-region close to the key area relatively small, so the key area is displayed more completely and normally in the image, giving a better visual experience.
In addition, the key area in the image is obtained in the following way: segmenting the image based on the entities in the image to obtain segmentation regions; and acquiring the key area in the image from the segmentation regions.
It can be understood that the labels determined according to the entities can represent the names of the entities, such as the names of objects or of people. With such labels it is clear and convenient to know which entity each segmentation region corresponds to, which in turn makes it easy to quickly retrieve the key area from the segmentation regions according to the entity names represented by the labels.
Determining the label of a segmentation region according to the entity in the image includes: for any entity, if face information is recognized from the entity, judging whether the recognized face information is stored in a face library; if it is stored in the face library, obtaining the label corresponding to the recognized face information from the face library and using it as the label of the segmentation region corresponding to the entity; and if the recognized face information is not stored in the face library, assigning a corresponding label to the recognized face information and storing the correspondence between the recognized face information and the assigned label in the face library. This provides a way of determining the labels of the segmentation regions: when an entity includes face information that is already stored in the face library, the corresponding label is conveniently obtained directly from the face library; when the recognized face information is not yet stored, a label is actively assigned and the correspondence is recorded for later use.
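The lookup-or-assign flow above can be sketched as follows. This is a toy model: the face "library" is a dict keyed by an exact feature value, whereas a real system would match face embeddings by similarity; all names here (FaceLibrary, lookup_or_assign, the "person NNN" label format) are illustrative, not from the patent.

```python
class FaceLibrary:
    """Toy face library mapping a face feature to a stored label."""

    def __init__(self):
        self._labels = {}   # face feature -> label
        self._next_id = 1   # counter used to mint new labels

    def lookup_or_assign(self, face_feature):
        """Return the stored label for a recognized face; if the face is
        not stored yet, assign a new label and record the mapping."""
        if face_feature in self._labels:
            return self._labels[face_feature]
        label = f"person {self._next_id:03d}"
        self._next_id += 1
        self._labels[face_feature] = label
        return label
```

Calling `lookup_or_assign` twice with the same feature returns the same label, matching the "store the correspondence for later use" step of the description.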
In addition, acquiring the minimum abscissa c_min, the maximum abscissa c_max, the minimum ordinate r_min, and the maximum ordinate r_max of the key area in the scaled image includes: obtaining a matrix of the scaled image through an instance segmentation algorithm, wherein the values of the elements of the matrix corresponding to the key area are greater than zero;
wherein x_{r,c} represents the element in row r and column c of the matrix, the value of R equals the vertical dimension of the scaled image, and the value of C equals the horizontal dimension of the scaled image. Because the matrix of the image can be obtained by processing the scaled image with an instance segmentation algorithm, and an instance segmentation algorithm can accurately segment the edge contours of the entities in the image, the minimum and maximum horizontal and vertical coordinates of the key area can be obtained quickly and accurately in this way.
Drawings
The various embodiments are illustrated by the corresponding drawings, which are not intended to limit the embodiments.
FIG. 1 is a flow chart of an image adaptation method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a specific implementation of step 102 in an embodiment of the present invention;
FIG. 3 is a schematic illustration of an original image according to the first embodiment of the invention;
FIG. 4 is a schematic illustration of the coordinates of a key area according to the first embodiment of the present invention;
FIG. 5 is a schematic representation of stretch regions in the first embodiment of the present invention;
FIG. 6 is a schematic representation of other stretch regions in the first embodiment of the present invention;
FIG. 7 is a schematic representation of an image after stretching according to the first stretch coefficient λ in the first embodiment of the invention;
FIG. 8 is a schematic representation of an image after stretching according to the second stretch coefficient δ in the first embodiment of the present invention;
FIG. 9 is a flow chart of an image adaptation method according to the second embodiment of the present invention;
FIG. 10 is a schematic illustration of stretch regions in the second embodiment of the invention;
FIG. 11 is a schematic illustration of an image after stretching in accordance with a second embodiment of the present invention;
fig. 12 is a flowchart of a manner of acquiring a key area according to the third embodiment of the present invention;
FIG. 13 is a schematic view of segmentation regions in accordance with the third embodiment of the invention;
FIG. 14 is a schematic view of the labels of the segmentation regions in accordance with the third embodiment of the present invention;
fig. 15 is a block diagram showing the configuration of an electronic apparatus according to the fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the various embodiments in order to provide a better understanding of the present application; the technical solution claimed in the present application can, however, be implemented without these technical details, and with various changes and modifications based on the following embodiments.
The first embodiment of the present invention relates to an image adaptation method, and the specific flow is shown in fig. 1, including:
step 101, scaling the image in equal proportion according to the size of the image and the size of a screen to be adapted;
step 102, acquiring a stretching area in the scaled image according to a key area in the image;
step 103, stretching the stretching area according to the size of the image and the size of the screen to be adapted.
Implementation details of the image adaptation method of this embodiment are described below; the following details are provided only for ease of understanding and are not necessary for implementing this embodiment.
In step 101, the image is scaled in equal proportion according to the size of the image and the size of the screen to be adapted. It can be understood that after the equal-proportion scaling, either the height of the image matches the height of the screen to be adapted or the width of the image matches the width of the screen to be adapted. The proportionally scaled image is thus ready for full-screen display on the screen to be adapted, while still being displayed completely and normally, without distortion or deformation.
In one example, let the size of the image be w × h and the size of the screen to be adapted be w' × h'. When the aspect ratio w'/h' of the screen to be adapted is greater than or equal to the aspect ratio w/h of the image, i.e., w'/h' ≥ w/h, the image is scaled according to the scaling ratio h'/h; when w'/h' < w/h, the image is scaled according to the scaling ratio w'/w.
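The scaling-ratio choice described above can be sketched in Python (a minimal sketch; the function names are illustrative, not from the patent):

```python
def scale_ratio(w, h, w_s, h_s):
    """Equal-proportion scaling ratio for an image of size w x h on a
    screen of size w_s x h_s (w' and h' in the text)."""
    if w_s / h_s >= w / h:   # screen relatively wider: fit the height
        return h_s / h
    return w_s / w           # screen relatively taller: fit the width

def scaled_size(w, h, w_s, h_s):
    """Image size after equal-proportion scaling, rounded to pixels."""
    s = scale_ratio(w, h, w_s, h_s)
    return round(w * s), round(h * s)
```

For example, a 400 × 300 image on a 1920 × 1080 screen is scaled by 1080/300 = 3.6 to 1440 × 1080: the height fits, and the remaining 480 pixels of width are what the stretching step later fills.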
In step 102, a stretching area in the zoomed image is obtained according to a key area in the image; in this embodiment, the key region may be a region that is automatically identified after the image is identified according to a machine learning algorithm, for example, when the image includes a portrait, the portrait is automatically identified as the key region; the key area may also be an area of interest that is selected by the user according to personal needs, for example, when the image displayed on the screen to be adapted includes a portrait a and a portrait B, the user selects the portrait a by interacting with the screen to be adapted, and when receiving a selection operation of the user on the portrait a, the portrait a is taken as the key area.
This embodiment provides a way of acquiring the stretching area, i.e., a specific implementation of step 102 (see fig. 2): after the key area is determined, the area to be stretched is defined according to the minimum and maximum horizontal and vertical coordinates of the key area, rather than treating every area outside the key area as an area to be stretched.
Step 1021, obtaining the minimum abscissa c_min, the maximum abscissa c_max, the minimum ordinate r_min, and the maximum ordinate r_max of the key area in the scaled image.
In one example, the image with the determined key area is processed by an instance segmentation algorithm, and the output is a mask matrix corresponding to the image. The size of the mask matrix equals the size of the scaled image; the value of each element of the matrix is either 0 or greater than 0 (for example, 1), and the elements with values greater than 0 are the elements corresponding to the key area. Therefore, the minimum and maximum coordinates of the key area can be obtained from the values of the elements in the mask matrix, namely:
wherein x_{r,c} represents the element in row r and column c of the matrix, the value of R equals the vertical dimension of the scaled image, and the value of C equals the horizontal dimension of the scaled image.
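The formula images for c_min, c_max, r_min, and r_max are not reproduced in this text; taking the extrema over all matrix elements greater than zero is the natural reading, and can be sketched with NumPy (function name illustrative):

```python
import numpy as np

def key_region_bounds(mask):
    """Return (c_min, c_max, r_min, r_max) of the key region in an
    R x C mask matrix whose elements are > 0 inside the key region."""
    rows, cols = np.nonzero(mask > 0)   # indices of key-region elements
    return cols.min(), cols.max(), rows.min(), rows.max()
```

The bounds are inclusive indices in the scaled image's coordinate system, which is how the later steps (1024/1025) use them.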
Step 1022, the aspect ratio of the image and the aspect ratio of the screen to be adapted are obtained.
Specifically, as explained in step 101, let the size of the image be w × h and the size of the screen to be adapted be w' × h'; the aspect ratio of the screen to be adapted is then w'/h' and the aspect ratio of the image is w/h.
Step 1023, judging whether the aspect ratio of the screen to be adapted is smaller than that of the image; if yes, go to step 1024, otherwise go to step 1025.
Step 1024, taking the areas of the scaled image whose ordinate is less than r_min or greater than r_max as the stretching area.
Step 1025, taking the areas of the scaled image whose abscissa is less than c_min or greater than c_max as the stretching area.
Specifically, when the aspect ratio of the screen to be adapted is greater than or equal to the aspect ratio of the image, the proportionally scaled image does not fit the width of the screen to be adapted, so the stretching area is defined horizontally according to the abscissas of the key area; when the aspect ratio of the screen to be adapted is less than that of the image, the scaled image does not fit the height of the screen to be adapted, so the stretching area is defined vertically according to the ordinates of the key area. Dividing the stretching area in this way keeps the key area inside the non-stretched area, so it is displayed completely and normally in the image, giving a better visual experience.
In one example, the original image shown in fig. 3 is provided. It mainly consists of "person 001", a "horse", and a "car", and the key area in the image is determined to be "person 001" and the "horse", i.e., the area formed by combining the outline of "person 001" with the outline of the "horse". From this area, the minimum abscissa c_min, the maximum abscissa c_max, the minimum ordinate r_min, and the maximum ordinate r_max of the key area are acquired; fig. 4 shows a schematic diagram of the coordinates of the key area.
When the aspect ratio w'/h' of the screen to be adapted is greater than or equal to the aspect ratio w/h of the image, i.e., w'/h' ≥ w/h, the areas of the scaled image whose abscissa is less than c_min or greater than c_max are taken as the stretching areas. In one example, from c_min and c_max in the coordinate diagram of the key area shown in fig. 4, the stretching-area diagram shown in fig. 5 is obtained, where area A in fig. 5 is the area of the scaled image whose abscissa is less than c_min, and area B is the area whose abscissa is greater than c_max.
When the aspect ratio w'/h' of the screen to be adapted is less than the aspect ratio w/h of the image, i.e., w'/h' < w/h, the areas of the scaled image whose ordinate is less than r_min or greater than r_max are taken as the stretching areas. In one example, from r_min and r_max in the coordinate diagram of the key area shown in fig. 4, the stretching-area diagram shown in fig. 6 is obtained, where area D in fig. 6 is the area whose ordinate is less than r_min, and area C is the area whose ordinate is greater than r_max.
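Steps 1023–1025 and the two examples above can be sketched as follows (a sketch only: the coordinate bounds are assumed to be inclusive indices in the scaled image, the scaled dimensions are passed in, and all names are illustrative):

```python
def stretch_regions(w, h, w_s, h_s, bounds, scaled_w, scaled_h):
    """Return the stretch direction and the two half-open ranges
    (column ranges for horizontal, row ranges for vertical) that make
    up the stretching area of the scaled image."""
    c_min, c_max, r_min, r_max = bounds
    if w_s / h_s >= w / h:
        # width does not fit: stretch columns left of c_min (area A)
        # and right of c_max (area B)
        return "horizontal", (0, c_min), (c_max + 1, scaled_w)
    # height does not fit: stretch rows above r_min (area D) and
    # below r_max (area C)
    return "vertical", (0, r_min), (r_max + 1, scaled_h)
```

For the 400 × 300 image on a 1920 × 1080 screen (scaled to 1440 × 1080) with key-area columns 500–900, the stretching areas are columns 0–499 and 901–1439, matching the A/B split of fig. 5.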
In step 103, in order to enable the stretched image to be displayed on the screen to be adapted in a full screen manner, a stretching coefficient is calculated to stretch the stretching area according to the size of the image and the size of the screen to be adapted.
When the aspect ratio w'/h' of the screen to be adapted is greater than or equal to the aspect ratio w/h of the image, i.e., w'/h' ≥ w/h, the height of the image scaled by the ratio h'/h matches the height of the screen to be adapted, but the width does not; a first stretching coefficient λ is therefore calculated, and the stretching area is horizontally stretched, wherein λ is calculated by the following formula:
wherein w' represents the width of the screen to be adapted, w represents the width of the image, h' represents the height of the screen to be adapted, and h represents the height of the image.
In one example, when the stretching areas are areas A and B shown in fig. 5, the image after stretching according to the first stretching coefficient λ is shown in fig. 7: the stretched image in fig. 7 is displayed full screen on the screen to be adapted, while the key areas "person 001" and "horse" are not distorted.
When the aspect ratio w'/h' of the screen to be adapted is less than the aspect ratio w/h of the image, i.e., w'/h' < w/h, the width of the image scaled by the ratio w'/w matches the width of the screen to be adapted, but the height does not; a second stretching coefficient δ is therefore calculated, and the stretching area is vertically stretched, wherein δ is calculated by the following formula:
in examples, when the stretching regions shown in fig. 6 are the C region and the D region, the image after stretching according to the second stretching coefficient δ is shown in fig. 8, and the stretched image in fig. 8 is displayed on the screen to be adapted in full screen, but the emphasized regions "person 001" and "horse" are not distorted.
When a user watches a video full screen on electronic devices with different screen sizes, the video is usually uniformly stretched and scaled in the horizontal and vertical directions until it fills the whole screen, but uniform stretching and scaling easily causes large deformation and distortion of the video picture. Therefore, when a video is played on an electronic device, the video frames composing the video can first be scaled in equal proportion according to the size of the video and the size of the screen of the electronic device; a stretching area in the scaled video frames is then obtained according to the key area in the video frames, and the stretching area is stretched, so that the video composed of the scaled and stretched frames is displayed full screen on the screen of the electronic device without distorting the key area of the video.
Compared with the prior art, this embodiment first scales the image in equal proportion according to the size of the image and the size of the screen to be adapted, so the content of the image is not distorted after the equal-proportion scaling. It then acquires the stretching area in the scaled image from the minimum and maximum horizontal and vertical coordinates of the key area, rather than treating every area outside the key area as an area to be stretched, so that within the non-stretched area the key area is displayed completely and normally, giving a better visual experience. After the stretching area is obtained, it is stretched according to the size of the image and the size of the screen to be adapted, so the stretched image can be displayed full screen on the screen to be adapted without affecting the viewing of the image. Since only the stretching area is stretched after the equal-proportion scaling, the key area is not stretched: its content is not distorted, and the proportionally scaled key area remains complete and normally displayed on the screen to be adapted. In addition, because the key area of interest may differ from case to case, the method fully accounts for such differences and ensures that, even when the image size changes to adapt to the screen size, the key area is not stretched or deformed, providing a better visual experience.
The second embodiment of the present invention relates to an image adaptation method. This embodiment is substantially the same as the first embodiment, but provides another way of stretching the stretching area. A flowchart of the image adaptation method in this embodiment is shown in fig. 9 and is described in detail below:
Specifically, in step 103 of the first embodiment, the stretching area is stretched according to one calculated stretching coefficient; in step 203 of this embodiment, if the stretching area is composed of two sub-regions, i.e., a first sub-region and a second sub-region, a stretching coefficient is calculated for each sub-region, and each sub-region is stretched accordingly.
In one case, when the stretching areas are the A and B areas shown in fig. 5, the A area can be further divided into a first sub-region (area A1) and a second sub-region (area A2), i.e., the A area is composed of areas A1 and A2, as shown in the stretching-area diagram of fig. 10, where area A2 is closer to the key area composed of "person 001" and "horse". Instead of stretching the whole A area by the first stretching coefficient λ, area A1 is horizontally stretched according to its horizontal stretching coefficient λ_1, and area A2 according to its horizontal stretching coefficient λ_2, wherein λ_1 and λ_2 are calculated by the following formulas:
Meanwhile, since the B area in fig. 10 is not divided into a first and a second sub-region, it can be stretched directly according to the first stretching coefficient λ. After area A1 is horizontally stretched according to λ_1, area A2 according to λ_2, and area B according to λ, the image is as shown in fig. 11: the stretched image in fig. 11 is displayed full screen on the screen to be adapted, the key areas "person 001" and "horse" are not distorted, and the deformation of area A2, which is close to the key area, is smaller than that of area A1.
Likewise, it will be appreciated that when the stretching areas are the C and D areas shown in fig. 6, and area C and/or area D is composed of a first sub-region and a second sub-region, vertically stretching the stretching area according to the second stretching coefficient δ includes:
vertically stretching the first sub-region according to its vertical stretching coefficient δ_1, and vertically stretching the second sub-region according to its vertical stretching coefficient δ_2, wherein δ_1 and δ_2 are calculated by the following formulas:
Different stretching coefficients are calculated by the above formulas and the sub-regions are stretched separately, so the deformation of the sub-region close to the key area after stretching is relatively small; the key area can thus be displayed completely and normally in the image, giving a better visual experience.
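The formula images for the per-sub-region coefficients are likewise missing from this text. One illustrative scheme consistent with the description — the sub-region nearer the key area deforms less, while the pair still adds the same total width as a single coefficient λ would — is sketched below; the damping parameter `ratio` is an invented knob, not from the patent:

```python
def sub_region_coeffs(lam, w1, w2, ratio=0.5):
    """Split one horizontal stretch coefficient lam into per-sub-region
    coefficients (lam1 for the outer sub-region of width w1, lam2 for
    the sub-region of width w2 nearer the key area) so that together
    they add the same width as stretching both by lam."""
    lam2 = 1 + ratio * (lam - 1)                # gentler near the key area
    lam1 = (lam * (w1 + w2) - lam2 * w2) / w1   # absorbs the remainder
    return lam1, lam2
```

The width-preservation constraint λ_1·w_1 + λ_2·w_2 = λ·(w_1 + w_2) guarantees the full-screen condition is unchanged; only the distribution of deformation moves away from the key area.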
Compared with the prior art, this embodiment provides another way of stretching the stretch region: when the stretch region is composed of a first sub-region and a second sub-region (that is, when the stretch region can be further divided into a first sub-region and a second sub-region), the sub-regions are stretched according to different stretch coefficients calculated by the above formulas, so that after stretching the deformation of the sub-region close to the key region is relatively small, the key region can be displayed relatively completely and normally in the image, and the visual experience is better.
A third embodiment of the present invention relates to an image adaptation method which is substantially the same as that of the first embodiment; on the basis of the image adaptation method of the first embodiment, the third embodiment provides a way of acquiring the key region. The flowchart of the image adaptation method in this embodiment is therefore still as shown in FIG. 1, and the flowchart of the way of acquiring the key region is as shown in FIG. 12; the flow of FIG. 12 is described in detail below:
Step 301: segmenting the image based on the entities in the image to obtain segmented regions.
Specifically, an entity in the image can be understood as a physical person or object. The image can be divided into a plurality of segmented regions based on the outlines of the entities in the image, each segmented region corresponding to one physical person or object. In one example, the segmentation in this step can be realized by an instance segmentation algorithm, which can clearly delineate the outline of an entity. For example, the segmented regions of an image processed by an instance segmentation algorithm are shown in FIG. 13; the segmented regions in FIG. 13 include a "person" segmented region, a "horse" segmented region and a "car" segmented region.
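Once a segmented region is available as a per-entity mask, the bounding box that later feeds the stretch-region selection (claim 2's c_min, c_max, r_min, r_max) can be extracted as follows. This is our own sketch; the patent does not specify how the box is computed from the mask.

```python
import numpy as np

def region_bbox(mask):
    """Bounding box (rmin, rmax, cmin, cmax) of one segmented entity,
    given its boolean mask (rows x cols). Inclusive indices."""
    rows = np.any(mask, axis=1)           # which rows contain the entity
    cols = np.any(mask, axis=0)           # which columns contain the entity
    rmin, rmax = np.where(rows)[0][[0, -1]]
    cmin, cmax = np.where(cols)[0][[0, -1]]
    return int(rmin), int(rmax), int(cmin), int(cmax)
```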
Step 302: determining the labels of the corresponding segmented regions according to the entities in the image.
Specifically, the label determined for a segmented region according to its entity can be used to characterize the name of the entity. For example, according to the entities in FIG. 13, the label "person 001" may be assigned to the segmented region of "person", the label "horse" to the segmented region of "horse", and the label "car" to the segmented region of "car"; the labels of the segmented regions are as shown in FIG. 14. Considering that the outline of a segmented region may not accurately convey what the entity is, a label is set for each segmented region; the label makes it easy to know clearly which entity a segmented region corresponds to, and also makes it quick to determine the key region according to the name of the entity represented by the label.
More specifically, for any entity in the image, if face information is recognized from the entity, it is determined whether the recognized face information is stored in a face library, where the face library stores correspondences between face information and labels. If the recognized face information is stored in the face library, the label corresponding to it may be acquired directly from the face library as the label of the segmented region corresponding to the entity containing the face information. If the recognized face information is not stored in the face library, a corresponding label is assigned to the recognized face information, and the correspondence between the recognized face information and the assigned label is stored in the face library.
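The face-library branch above can be sketched as a simple lookup-or-assign step. This is an illustration under our own assumptions: `face_id` stands in for whatever identity fingerprint a real face-recognition stage would produce, and the "person NNN" label format is borrowed from the figures.

```python
face_library = {}        # face_id -> label, mirroring the stored correspondences
_next_person_no = 1

def label_for_face(face_id):
    """Return the stored label for a recognised face; on first sight,
    assign a new label and store the correspondence in the library."""
    global _next_person_no
    if face_id in face_library:
        return face_library[face_id]          # already in the face library
    label = f"person {_next_person_no:03d}"   # e.g. "person 001"
    _next_person_no += 1
    face_library[face_id] = label             # store new correspondence
    return label
```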
Step 303: acquiring the key region in the image according to the labels.
Specifically, the segmented region and label to be emphasized can be automatically identified by a machine learning algorithm. For example, when the segmented regions include the segmented region of "person 001" with the label "person 001", the algorithm automatically identifies "person 001" as content requiring emphasis and then identifies the segmented region of "person 001" as the key region. In another example, the key region may be a region of interest selected by the user according to personal needs. The practical application of this embodiment's way of acquiring the key region is described below, taking the image frames constituting a video as an example.
When a user watches a video through the electronic device, instance segmentation is performed on the video frames constituting the video: the segmented regions of each video frame are obtained according to the entities in the frame, and labels are assigned to the segmented regions, as shown in FIG. 14. When a region the user cares about appears in a video frame, the user can pause the video by interacting with the screen to be adapted. After the user's pause operation is detected, the segmented regions of the current video frame and their assigned labels are displayed on the screen to be adapted in the form shown in FIG. 14 (different segmented regions can be displayed in different colors so that they are clearly distinguished), and the user can select a displayed segmented region by interacting with the screen. When the user's selection of the "person 001" and "horse" segmented regions is detected, these regions are recorded as the key regions (for example, the key regions may be recorded as "person 001" and "horse").
After the key region is determined, the user can resume playback by interacting with the screen to be adapted. Taking the video frame in which the key region was selected as the starting frame, each subsequent frame is scaled and stretched, and the video composed of the scaled and stretched frames is displayed on the screen to be adapted, so that the user views a full-screen video in which the key region is neither distorted nor deformed.
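The resume-and-adapt flow above can be sketched as a per-frame loop. This is a minimal illustration; `adapt_frame` stands in for the whole scale/select-stretch-region/stretch step of the first embodiment, and all names are ours, not the patent's.

```python
def adapt_video(frames, start_idx, key_bbox, screen_size, adapt_frame):
    """From the frame where the user selected the key region onward,
    apply the scale-and-stretch adaptation to every frame; frames
    before the pause point are left unchanged."""
    out = list(frames[:start_idx])                    # frames already shown
    for f in frames[start_idx:]:
        out.append(adapt_frame(f, key_bbox, screen_size))
    return out
```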
Compared with the prior art, in this embodiment the image is segmented based on the entities in it by an instance segmentation algorithm, yielding segmented regions with clear outlines; labels are assigned to the segmented regions, and the key region in the image is obtained from the segmented regions and their labels. The entity corresponding to each segmented region can thus be identified clearly and conveniently, and the key region can be obtained quickly from the segmented regions according to the names of the entities represented by the labels.
The above examples in this embodiment are given for ease of understanding and do not limit the technical solution of the present invention.
The steps of the above methods are divided only for clarity of description; when implemented, they may be combined into one step, or a step may be split into multiple steps, and such variants fall within the scope of this patent as long as they contain the same logical relationship. Adding insignificant modifications to an algorithm or flow, or introducing insignificant designs, without changing the core design of the algorithm or flow, also falls within the scope of this patent.
A fourth embodiment of the present invention relates to an electronic device, as shown in FIG. 15, comprising at least one processor 401 and a memory 402 communicatively connected to the at least one processor 401, wherein the memory 402 stores instructions executable by the at least one processor 401, the instructions being executed by the at least one processor 401 to enable the at least one processor 401 to perform the above image adaptation method.
The memory 402 and the processor 401 are coupled by a bus, which may comprise any number of interconnected buses and bridges coupling together one or more of the various circuits of the processor 401 and the memory 402. The bus may also couple various other circuits such as peripherals, voltage regulators and power management circuits, as is known in the art; these are therefore not described further herein.
The processor 401 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management and other control functions, while the memory 402 may be used to store data used by the processor 401 in performing operations.
A fifth embodiment of the present invention relates to a computer-readable storage medium storing a computer program which, when executed by a processor, implements the image adaptation method of the above embodiments.
That is, as those skilled in the art will understand, all or part of the steps of the methods in the above embodiments may be implemented by instructing the relevant hardware through a program. The program is stored in a storage medium and includes several instructions to make a device (which may be a single-chip microcomputer, a chip, etc.) or a processor execute all or part of the steps of the methods of the embodiments of the present application.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the invention, and that various changes in form and detail may be made therein in practice without departing from the spirit and scope of the invention.
Claims (10)
- 1. An image adaptation method, comprising: scaling the image in equal proportion according to the size of the image and the size of a screen to be adapted; acquiring a stretch region in the scaled image according to a key region in the image; and stretching the stretch region according to the size of the image and the size of the screen to be adapted.
- 2. The image adaptation method according to claim 1, wherein the acquiring of the stretch region in the scaled image according to the key region in the image comprises: acquiring the minimum abscissa c_min, the maximum abscissa c_max, the minimum ordinate r_min and the maximum ordinate r_max of the key region in the scaled image; acquiring the aspect ratio of the image and the aspect ratio of the screen to be adapted; if the aspect ratio of the screen to be adapted is greater than or equal to the aspect ratio of the image, taking the region whose abscissa is smaller than c_min and the region whose abscissa is greater than c_max in the scaled image as the stretch regions; and if the aspect ratio of the screen to be adapted is smaller than that of the image, taking the region whose ordinate is smaller than r_min and the region whose ordinate is greater than r_max in the scaled image as the stretch regions.
- 3. The image adaptation method according to claim 2, wherein the stretching of the stretch region according to the size of the image and the size of the screen to be adapted comprises: if the aspect ratio of the screen to be adapted is greater than or equal to the aspect ratio of the image, horizontally stretching the stretch region according to a first stretch coefficient λ, wherein λ is calculated by the following formula, in which w' represents the width of the screen to be adapted, w the width of the image, h' the height of the screen to be adapted, and h the height of the image; and if the aspect ratio of the screen to be adapted is smaller than that of the image, vertically stretching the stretch region according to a second stretch coefficient δ, wherein δ is calculated by the following formula:
- 4. The image adaptation method according to claim 2, wherein, if the stretch region is composed of a first sub-region and a second sub-region: the horizontally stretching of the stretch region according to the first stretch coefficient λ comprises horizontally stretching the first sub-region according to the horizontal stretch coefficient λ_1 of the first sub-region, and horizontally stretching the second sub-region according to the horizontal stretch coefficient λ_2 of the second sub-region, wherein λ_1 and λ_2 are calculated by the following formulas; and the vertically stretching of the stretch region according to the second stretch coefficient δ comprises vertically stretching the first sub-region according to the vertical stretch coefficient δ_1 of the first sub-region, and vertically stretching the second sub-region according to the vertical stretch coefficient δ_2 of the second sub-region, wherein δ_1 and δ_2 are calculated by the following formulas:
- 5. The image adaptation method according to claim 1, wherein the key region in the image is obtained by: segmenting the image based on the entities in the image to obtain segmented regions; and acquiring the key region in the image from the segmented regions.
- 6. The image adaptation method according to claim 5, wherein the acquiring of the key region in the image from the segmented regions comprises: determining the labels of the corresponding segmented regions according to the entities in the image; and acquiring the key region in the image according to the labels.
- 7. The image adaptation method according to claim 6, wherein the determining of the labels of the corresponding segmented regions according to the entities in the image comprises: for any entity, if face information is recognized from the entity, judging whether the recognized face information is stored in a face library, the face library storing correspondences between face information and labels; if it is stored in the face library, acquiring the label corresponding to the recognized face information from the face library as the label of the segmented region corresponding to the entity; and if it is not stored in the face library, assigning a corresponding label to the recognized face information and storing the correspondence between the recognized face information and the assigned label in the face library.
- 8. The image adaptation method according to any one of claims 2 to 4, wherein the acquiring of the minimum abscissa c_min, the maximum abscissa c_max, the minimum ordinate r_min and the maximum ordinate r_max of the key region in the scaled image comprises: obtaining a matrix of the scaled image through an instance segmentation algorithm, wherein the values of the elements of the matrix corresponding to the key region are greater than zero; and wherein x_{r,c} represents the element in row r, column c of the matrix, the number of rows R being equal to the vertical dimension of the scaled image and the number of columns C being equal to the horizontal dimension of the scaled image.
- 9. An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the image adaptation method of any one of claims 1 to 8.
- 10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the image adaptation method of any one of claims 1 to 8.
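The aspect-ratio branch of claim 2 can be sketched in code as follows. This is an illustrative sketch, not part of the claims: the `(start, stop)` pair representation of a region and all parameter names are our own, and all coordinates are assumed to be in the scaled image (whose aspect ratio equals the original's, since the scaling is equal-proportion).

```python
def stretch_regions(w, h, w_s, h_s, c_min, c_max, r_min, r_max):
    """Claim 2's branch: compare the aspect ratio of the screen to be
    adapted (w_s x h_s) with that of the image (w x h), and return the
    two strips of the scaled image lying outside the key region's
    bounding box as half-open (start, stop) index ranges."""
    if w_s / h_s >= w / h:
        # screen relatively wider: stretch columns left of c_min and right of c_max
        return "horizontal", (0, c_min), (c_max + 1, w)
    # screen relatively taller: stretch rows above r_min and below r_max
    return "vertical", (0, r_min), (r_max + 1, h)
```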
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910801673.XA CN110738598A (en) | 2019-08-28 | 2019-08-28 | Image adaptation method, electronic device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910801673.XA CN110738598A (en) | 2019-08-28 | 2019-08-28 | Image adaptation method, electronic device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110738598A true CN110738598A (en) | 2020-01-31 |
Family
ID=69267736
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910801673.XA Pending CN110738598A (en) | 2019-08-28 | 2019-08-28 | Image adaptation method, electronic device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110738598A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112135183A (en) * | 2020-09-23 | 2020-12-25 | 湖南快乐阳光互动娱乐传媒有限公司 | Video playing method and system |
CN112750079A (en) * | 2020-12-29 | 2021-05-04 | 深圳市鸿合创新信息技术有限责任公司 | Image processing method and device and all-in-one machine |
CN114040100A (en) * | 2021-11-02 | 2022-02-11 | 上汽通用五菱汽车股份有限公司 | Vehicle-mounted camera display method, system and equipment based on dynamic adaptation |
WO2022063158A1 (en) * | 2020-09-27 | 2022-03-31 | 上海连尚网络科技有限公司 | Local screen adaptation method and device |
CN114281230A (en) * | 2021-12-15 | 2022-04-05 | 贵阳语玩科技有限公司 | Background picture generation method, device, medium and equipment suitable for different aspect ratios |
CN116828224A (en) * | 2023-08-28 | 2023-09-29 | 深圳有咖互动科技有限公司 | Real-time interaction method, device, equipment and medium based on interface gift icon |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7158158B1 (en) * | 2003-03-12 | 2007-01-02 | Apple Computer, Inc. | Method and apparatus for nonlinear anamorphic scaling of video images |
CN101093659A (en) * | 2006-06-20 | 2007-12-26 | 三星电子株式会社 | Apparatus and method for low distortion display in a portable communication terminal |
JP2010231282A (en) * | 2009-03-25 | 2010-10-14 | Sony Computer Entertainment Inc | Information processing apparatus and information processing method |
CN103530845A (en) * | 2013-10-19 | 2014-01-22 | 深圳市晶日盛科技有限公司 | Improved image zooming method |
CN103617599A (en) * | 2013-11-21 | 2014-03-05 | 北京工业大学 | Image inhomogeneous mapping method based on grid deformation optimization |
US20140205206A1 (en) * | 2013-01-24 | 2014-07-24 | Mayur Datar | Systems and methods for resizing an image |
US20150016747A1 (en) * | 2013-07-12 | 2015-01-15 | Vivotek Inc. | Image processor and image combination method thereof |
CN104461439A (en) * | 2014-12-29 | 2015-03-25 | 联想(北京)有限公司 | Information processing method and electronic equipment |
US20160065912A1 (en) * | 2012-05-22 | 2016-03-03 | Cognex Corporation | Machine vision systems and methods with predictive motion control |
US20160210768A1 (en) * | 2015-01-15 | 2016-07-21 | Qualcomm Incorporated | Text-based image resizing |
CN106204439A (en) * | 2016-06-28 | 2016-12-07 | 乐视控股(北京)有限公司 | The method and system of picture self-adaptive processing |
CN107958230A (en) * | 2017-12-22 | 2018-04-24 | 中国科学院深圳先进技术研究院 | Facial expression recognizing method and device |
CN108737882A (en) * | 2018-05-09 | 2018-11-02 | 腾讯科技(深圳)有限公司 | Display methods, device, storage medium and the electronic device of image |
CN108830787A (en) * | 2018-06-20 | 2018-11-16 | 北京微播视界科技有限公司 | The method, apparatus and electronic equipment of anamorphose |
CN109255752A (en) * | 2017-07-14 | 2019-01-22 | 北京字节跳动网络技术有限公司 | Image adaptive compression method, device, terminal and storage medium |
CN109388311A (en) * | 2017-08-03 | 2019-02-26 | Tcl集团股份有限公司 | A kind of image display method, device and equipment |
CN109492023A (en) * | 2018-10-12 | 2019-03-19 | 咪咕文化科技有限公司 | Automobile information processing method and equipment and computer storage medium |
Non-Patent Citations (2)
Title |
---|
Zhang Qian: "Research on content-aware multi-operator image retargeting" *
Cao Lianchao: "Research on content-based image/video retargeting methods" *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110738598A (en) | Image adaptation method, electronic device and storage medium | |
KR102677022B1 (en) | Image processing apparatus and image processing method thereof | |
CN110163832B (en) | Face fusion method and device and terminal | |
CN106648511A (en) | Self-adaptive display method and device of display resolutions | |
JP7278766B2 (en) | Image processing device, image processing method and program | |
CN112135041B (en) | Method and device for processing special effect of human face and storage medium | |
JP4737269B2 (en) | Image processing apparatus and program | |
US20030113040A1 (en) | image database apparatus and method of controlling operation of same | |
KR20090132484A (en) | Information display apparatus, information displaying method, and computer readable medium | |
CN111814905A (en) | Target detection method, target detection device, computer equipment and storage medium | |
CN105894470A (en) | Image processing method and device | |
JP2014041433A (en) | Display device, display method, television receiver, and display control device | |
CN112752158A (en) | Video display method and device, electronic equipment and storage medium | |
CN115237522A (en) | Page self-adaptive display method and device | |
US9349038B2 (en) | Method and apparatus for estimating position of head, computer readable storage medium thereof | |
JP2008046608A (en) | Video window detector | |
CN111010605B (en) | Method for displaying video picture-in-picture window | |
JP7385416B2 (en) | Image processing device, image processing system, image processing method, and image processing program | |
JP2010244251A (en) | Image processor for detecting coordinate position for characteristic site of face | |
CN111339358A (en) | Movie recommendation method and device, computer equipment and storage medium | |
CN113132786A (en) | User interface display method and device and readable storage medium | |
CN112465931B (en) | Image text erasing method, related equipment and readable storage medium | |
CN114332297A (en) | Image drawing method and device, computer equipment and storage medium | |
CN111860492B (en) | License plate inclination correction method and device, computer equipment and storage medium | |
JP2000089747A (en) | Method and device for displaying image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | |
Application publication date: 20200131 |