CN111083378A - Image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN111083378A
Authority
CN
China
Prior art keywords
image
target object
position parameter
processing
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911403200.0A
Other languages
Chinese (zh)
Inventor
张祎
贺跃理
高小菊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201911403200.0A priority Critical patent/CN111083378A/en
Publication of CN111083378A publication Critical patent/CN111083378A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof

Abstract

The present disclosure provides an image processing method, including: obtaining and processing a first image; if the first image includes at least a first target object and a second target object, determining a first position parameter of the first target object and a second position parameter of the second target object; and, if the first position parameter and the second position parameter do not satisfy a condition, processing the first image to obtain a second image different from the first image, where a third position parameter of the first target object and a fourth position parameter of the second target object in the second image satisfy the condition. The present disclosure also provides an image processing apparatus and an electronic device.

Description

Image processing method and device and electronic equipment
Technical Field
The present disclosure relates to an image processing method, an image processing apparatus, and an electronic device.
Background
With the rapid development of electronic technology, electronic devices with image-capture functions are used in an increasing number of scenes, both in daily life and at work. However, images captured by related-art electronic devices often fail to meet users' needs. For example, when several target objects appear in a first image captured by a related-art electronic device, their heights in the first image may be inconsistent, so the overall layout of the first image is not compact. Moreover, when the user crops the first image to extract the regions where the target objects are located, the height inconsistency can leave individual target objects incomplete in the cropped image, for example with part of an object cut off.
Disclosure of Invention
One aspect of the present disclosure provides an image processing method, including: obtaining and processing a first image, if the first image at least comprises a first target object and a second target object, determining a first position parameter of the first target object and a second position parameter of the second target object, and if the first position parameter and the second position parameter do not meet a condition, processing the first image to obtain a second image different from the first image, wherein a third position parameter of the first target object and a fourth position parameter of the second target object in the second image meet the condition.
Optionally, the condition satisfied by the third position parameter of the first target object and the fourth position parameter of the second target object in the second image includes at least one of the following: an included angle between a line connecting the first target object and the second target object in the second image and at least one first edge of the second image is smaller than an angle threshold; and a difference between the distances from the first target object and the second target object in the second image to the same first edge of the at least one first edge is smaller than a distance threshold.
Optionally, processing the first image to obtain a second image different from the first image includes at least one of the following: moving the first target object in the first image to obtain the second image, where the first position parameter differs from the third position parameter and the second position parameter is the same as the fourth position parameter; moving the second target object in the first image to obtain the second image, where the second position parameter differs from the fourth position parameter and the first position parameter is the same as the third position parameter; and moving both the first target object and the second target object in the first image to obtain the second image, where the first position parameter differs from the third position parameter and the second position parameter differs from the fourth position parameter.
Optionally, the method further includes: obtaining a third image, and processing the third image to obtain the first image, where the first image includes at least an edge-portion image of the third image.
Optionally, the processing the third image to obtain the first image includes: determining a fifth position parameter of the first target object in the third image and a sixth position parameter of the second target object in the third image, and processing the third image according to the fifth position parameter and the sixth position parameter to obtain the first image.
Optionally, the processing the first image to obtain a second image different from the first image includes: dividing the first image into a plurality of areas according to the first position parameter and the second position parameter, wherein the plurality of areas at least comprise a first area and a second area, the first area comprises the first target object, the second area comprises the second target object, and processing at least one of the first area and the second area to obtain the second image.
Optionally, processing at least one of the first region and the second region to obtain the second image includes at least one of the following: performing scaling processing on at least one of the first region and the second region; and performing translation processing on at least one of the first region and the second region.
Optionally, the processing at least one of the first region and the second region to obtain the second image includes: receiving a processing instruction of a user for at least one of the first area and the second area, wherein the processing instruction comprises at least one of a zooming instruction and a translating instruction, and processing at least one of the first area and the second area based on the processing instruction.
Another aspect of the present disclosure provides an image processing apparatus including an obtaining-and-processing module, a determining module, and a processing module. The obtaining-and-processing module obtains and processes the first image. The determining module determines a first position parameter of the first target object and a second position parameter of the second target object if the first image includes at least the first target object and the second target object. The processing module processes the first image to obtain a second image different from the first image if the first position parameter and the second position parameter do not satisfy the condition, where the third position parameter of the first target object and the fourth position parameter of the second target object in the second image satisfy the condition.
Optionally, the condition satisfied by the third position parameter of the first target object and the fourth position parameter of the second target object in the second image includes at least one of the following: an included angle between a line connecting the first target object and the second target object in the second image and at least one first edge of the second image is smaller than an angle threshold; and a difference between the distances from the first target object and the second target object in the second image to the same first edge of the at least one first edge is smaller than a distance threshold.
Optionally, processing the first image to obtain a second image different from the first image includes at least one of the following: moving the first target object in the first image to obtain the second image, where the first position parameter differs from the third position parameter and the second position parameter is the same as the fourth position parameter; moving the second target object in the first image to obtain the second image, where the second position parameter differs from the fourth position parameter and the first position parameter is the same as the third position parameter; and moving both the first target object and the second target object in the first image to obtain the second image, where the first position parameter differs from the third position parameter and the second position parameter differs from the fourth position parameter.
Optionally, the apparatus further includes an obtaining module and a processing module. The obtaining module obtains a third image, and the processing module processes the third image to obtain the first image, where the first image includes at least an edge-portion image of the third image.
Optionally, the processing the third image to obtain the first image includes: determining a fifth position parameter of the first target object in the third image and a sixth position parameter of the second target object in the third image, and processing the third image according to the fifth position parameter and the sixth position parameter to obtain the first image.
Optionally, the processing the first image to obtain a second image different from the first image includes: dividing the first image into a plurality of areas according to the first position parameter and the second position parameter, wherein the plurality of areas at least comprise a first area and a second area, the first area comprises the first target object, the second area comprises the second target object, and processing at least one of the first area and the second area to obtain the second image.
Optionally, processing at least one of the first region and the second region to obtain the second image includes at least one of the following: performing scaling processing on at least one of the first region and the second region; and performing translation processing on at least one of the first region and the second region.
Optionally, the processing at least one of the first region and the second region to obtain the second image includes: receiving a processing instruction of a user for at least one of the first area and the second area, wherein the processing instruction comprises at least one of a zooming instruction and a translating instruction, and processing at least one of the first area and the second area based on the processing instruction.
Another aspect of the present disclosure provides an electronic device including: one or more processors; memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as above.
Another aspect of the disclosure provides a non-transitory readable storage medium storing computer-executable instructions for implementing the method as above when executed.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as above when executed.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically shows a flow chart of an image processing method according to an embodiment of the present disclosure;
fig. 2 schematically shows a schematic diagram of an image processing method according to a first embodiment of the present disclosure;
fig. 3 schematically shows a schematic diagram of an image processing method according to a second embodiment of the present disclosure;
fig. 4 schematically shows a schematic diagram of an image processing method according to a third embodiment of the present disclosure;
FIG. 5 schematically shows a schematic diagram of a scaling process according to an embodiment of the present disclosure;
FIG. 6 schematically shows a schematic diagram of a translation process according to an embodiment of the present disclosure;
fig. 7 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure; and
FIG. 8 schematically shows a block diagram of a computer system for implementing image processing according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable control apparatus to produce a machine, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.
Accordingly, the techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable medium having instructions stored thereon for use by or in connection with an instruction execution system. In the context of this disclosure, a computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the instructions. For example, the computer readable medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the computer readable medium include: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or wired/wireless communication links.
An embodiment of the present disclosure provides an image processing method, including: obtaining and processing a first image, if the first image at least comprises a first target object and a second target object, determining a first position parameter of the first target object and a second position parameter of the second target object, and if the first position parameter and the second position parameter do not meet a condition, processing the first image to obtain a second image different from the first image. Wherein the third position parameter of the first target object and the fourth position parameter of the second target object in the second image satisfy a condition.
Fig. 1 schematically shows a flow chart of an image processing method according to an embodiment of the present disclosure.
As shown in fig. 1, the image processing method includes operations S110 to S130, for example.
In operation S110, a first image is obtained and processed.
According to an embodiment of the present disclosure, processing the first image may include, for example, identifying whether a target object, which may be a user, is in the first image.
In operation S120, if the first image includes at least a first target object and a second target object, a first position parameter of the first target object and a second position parameter of the second target object are determined.
For example, if the recognition result indicates that the first target object and the second target object are included in the first image, a first position parameter of the first target object in the first image and a second position parameter of the second target object in the first image may be further determined.
In operation S130, if the first position parameter and the second position parameter do not satisfy the condition, the first image is processed to obtain a second image different from the first image. Wherein the third position parameter of the first target object and the fourth position parameter of the second target object in the second image satisfy a condition.
According to an embodiment of the present disclosure, the positive direction of the first image may be determined first, for example from the poses of the first target object and the second target object in the first image. If the first target object and the second target object are users, the direction in which a user's body extends may be taken as the positive direction of the first image, for example the direction of a line between the user's head and any body part below the head.
According to an embodiment of the present disclosure, the first and second position parameters not satisfying the condition may, for example, represent that the difference in height of the first and second target objects in the positive direction of the first image is large. That is, the first target object and the second target object are not at the same height in the first image. For example, in one case, the pose of the first target object is standing and the pose of the second target object is sitting, resulting in a large height difference between the first target object and the second target object.
The first image may be processed to obtain a second image if the first position parameter and the second position parameter do not satisfy the condition. For example, at least one of the first target object and the second target object may be processed, and the obtained position information of the first target object and the second target object in the second image satisfies the condition.
According to an embodiment of the present disclosure, the third and fourth position parameters satisfying the condition may, for example, represent that the difference in height of the first and second target objects in the positive direction of the first image is small. That is, the first target object and the second target object are at approximately the same height in the second image.
In other words, the first image is processed into a second image in which the position parameters of the first target object and the second target object satisfy the condition: the height difference between the two target objects is smaller, the heights of the target objects in the second image are kept consistent, and the overall layout of the second image is more compact.
Fig. 2 schematically shows a schematic diagram of an image processing method according to a first embodiment of the present disclosure.
As shown in fig. 2, the first image 210 includes, for example, a first target object and a second target object. The positive direction of the first image 210 is, for example, direction a. The horizontal edge of the first image 210 is for example perpendicular to the direction a. The horizontal edges include, for example, an upper edge and a lower edge. The first edge shown in fig. 2 is, for example, a horizontal edge.
It can be seen that the difference in height between the first target object and the second target object along direction a of the first image 210 is relatively large; for example, the included angle α₁ between the line connecting the first target object and the second target object and at least one first edge is relatively large. The at least one first edge may be, for example, the upper edge or the lower edge.
According to an embodiment of the disclosure, the second image 220 is obtained by processing the first image 210. The third position parameter of the first target object and the fourth position parameter of the second target object in the second image 220 satisfy a condition including, for example, that the included angle α₂ between the line connecting the first target object and the second target object in the second image 220 and at least one first edge of the second image 220 is smaller than the angle threshold. The angle threshold may be set according to the actual application, for example to 5°, 10°, or the like.
According to the embodiment of the disclosure, since the size of the included angle between the connecting line of the first target object and the second target object and the at least one first edge of the second image can represent the height difference between the first target object and the second target object, the second image with the included angle smaller than the angle threshold value can be obtained by processing the first image based on the included angle. In addition, the included angle can accurately represent whether the first target object and the second target object are at the same height, so that the second image obtained by processing the image by taking the included angle as a reference is more accurate.
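The angle check described for fig. 2 can be sketched as follows. This is purely illustrative and not code from the patent: the function names, the representation of each target object as a single (x, y) reference point (e.g. the centre of its bounding box), and the default threshold of 5° are all assumptions.

```python
import math

def angle_to_horizontal(p1, p2):
    """Acute angle (degrees) between the line p1-p2 and a horizontal first edge."""
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    ang = abs(math.degrees(math.atan2(dy, dx)))
    # A line makes the same angle with the edge whichever way it is traversed,
    # so fold the result into [0, 90].
    return min(ang, 180.0 - ang)

def satisfies_angle_condition(p1, p2, angle_threshold=5.0):
    """True when the connecting line is nearly parallel to the first edge."""
    return angle_to_horizontal(p1, p2) < angle_threshold
```

For example, two objects at heights 100 and 102 spaced 200 pixels apart give an angle well under 5° and pass, while a 60-pixel height difference over 100 pixels (about 31°) fails.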
Fig. 3 schematically shows a schematic diagram of an image processing method according to a second embodiment of the present disclosure.
As shown in fig. 3, the first image 310 is processed to obtain a second image 320. In the first image, the height difference between the first target object and the second target object along direction a of the first image 310 is large: the distance from the first target object to the at least one first edge is a first distance, the distance from the second target object to the at least one first edge is a second distance, and the distance difference H₁ between the first distance and the second distance is large, indicating that the first position parameter of the first target object and the second position parameter of the second target object in the first image 310 do not satisfy the condition.
The condition satisfied by the third position parameter of the first target object and the fourth position parameter of the second target object in the second image 320 includes: the distance difference H₂ between the first target object and the second target object with respect to the same first edge of the at least one first edge is smaller than the distance threshold. The same first edge is, for example, the upper edge or the lower edge.
For example, the distance from the first target object to the at least one first edge in the second image 320 is a first distance, the distance from the second target object to the at least one first edge is a second distance, and the distance difference H₂ between the first distance and the second distance is smaller than the distance threshold. The distance threshold may be set according to the actual application, for example to 5 mm, 10 mm, or the like.
According to the embodiment of the present disclosure, since the distance difference between the first target object and the second target object to the at least one first edge can characterize the height difference between the first target object and the second target object, the first image may be processed based on the distance difference to obtain the second image having the distance difference smaller than the distance threshold. In addition, the distance difference can relatively accurately indicate whether the first target object and the second target object are at the same height, so that the second image obtained by processing the image by taking the distance difference as a reference is more accurate.
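The distance-difference variant of the condition from fig. 3 can be sketched in the same illustrative spirit (names and the default threshold are invented; each object is reduced to the y-coordinate of a reference point, and the first edge to a single row coordinate):

```python
def distance_difference(y1, y2, edge_y):
    """|d1 - d2|, where d1 and d2 are each object's distance to the same first edge."""
    return abs(abs(y1 - edge_y) - abs(y2 - edge_y))

def satisfies_distance_condition(y1, y2, edge_y, distance_threshold=10):
    """True when both objects sit at nearly the same distance from that edge."""
    return distance_difference(y1, y2, edge_y) < distance_threshold
```

With the lower edge at row 0, objects at rows 50 and 55 differ by 5 and pass a threshold of 10, whereas rows 50 and 80 differ by 30 and fail.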
According to the embodiment of the present disclosure, the height of the first image along direction a in fig. 2 or fig. 3 is large. Typically, the first image may be a preview image acquired by the electronic device, whereas the image the electronic device finally needs to save is a target image of height H. If a portion of height H were cut directly from the first image, some target objects in the target image would be incomplete, for example with their heads or other parts cropped out. The disclosed embodiment therefore first processes the first image to obtain the second image; since the heights of the target objects in the second image along direction a are consistent, a portion of height H can be cut from the second image as the target image. For example, the target image in the embodiment of fig. 2 may be image 230, and the target image in the embodiment of fig. 3 may be image 330.
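Cutting the height-H portion once the objects are aligned could look like the following minimal sketch (an assumption, not the patent's implementation): the image is treated as a list of rows, and the strip is centred on the shared object height and clamped to the image bounds.

```python
def crop_strip(image_rows, center_row, H):
    """Cut a horizontal strip of height H around center_row, clamped to the image."""
    # Clamp so the strip never runs past the top or bottom of the image.
    top = max(0, min(center_row - H // 2, len(image_rows) - H))
    return image_rows[top:top + H]
```

For a 100-row image, a strip of height 20 around row 50 spans rows 40 to 59; asking for the same strip around row 5 slides it down to start at row 0 instead of leaving it short.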
According to an embodiment of the present disclosure, processing the first image to obtain the second image different from the first image may include at least one of the following ways, for example.
In the first way, for example, a first target object in a first image is moved to obtain a second image, wherein the first position parameter is different from the third position parameter, and the second position parameter is the same as the fourth position parameter. For example, the first target object is moved in the positive direction of the first image with reference to the position of the second target object, so that the heights of the first target object and the second target object in the second image are matched. Wherein the third position parameter of the first target object in the second image is different from the first position parameter of the first target object in the first image. In addition, since the second target object is not moved, the fourth position parameter of the second target object in the second image is the same as the second position parameter of the second target object in the first image.
In a second way, for example, a second target object in the first image is moved to obtain a second image, wherein the second position parameter is different from the fourth position parameter, and the first position parameter is the same as the third position parameter. For example, the second target object is moved in the positive direction of the first image with reference to the position of the first target object, so that the heights of the first target object and the second target object in the second image are matched. Wherein the fourth position parameter of the second target object in the second image is different from the second position parameter of the second target object in the first image. In addition, since the first target object is not moved, the third position parameter of the first target object in the second image is the same as the first position parameter of the first target object in the first image.
In a third method, for example, a first target object and a second target object in the first image are moved to obtain a second image, wherein the first position parameter is different from the third position parameter, and the second position parameter is different from the fourth position parameter. For example, with the intermediate position of the first image as a reference, the first target object and the second target object are moved in the positive direction of the first image to be close to the intermediate position of the first image so that the first target object and the second target object in the second image are at the intermediate position and keep the heights uniform.
In another case, a reference target object may be identified in the first image. The reference target object may be the first target object, the second target object, or an object other than both. At least one of the first target object and the second target object may then be moved along the positive direction of the image to the height of the reference target object, so that the moved object is level with the reference target object.
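The moving strategies above can be condensed into one sketch (illustrative only; the bounding-box representation, function name, and the choice of the bottom edge as the alignment baseline are assumptions, not details from the patent). Shifting one box with another as reference covers the first two ways and the reference-object case; shifting all boxes to a mean baseline approximates the third way.

```python
def align_vertically(boxes, mode="reference", ref_index=0):
    """Shift (x, y, w, h) boxes along the image's positive direction
    so their bottom edges line up."""
    bottoms = [y + h for (_, y, _, h) in boxes]
    if mode == "reference":
        # Ways 1 and 2 / reference-object case: keep one box fixed, move the rest.
        target = bottoms[ref_index]
    else:
        # Way 3: move every box toward a shared middle baseline.
        target = sum(bottoms) / len(bottoms)
    return [(x, y + (target - b), w, h)
            for (x, y, w, h), b in zip(boxes, bottoms)]
```

In reference mode the box at `ref_index` keeps its position parameter unchanged while the others change, matching the "same as" / "different from" pairings in the claims.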
Fig. 4 schematically shows a schematic diagram of an image processing method according to a third embodiment of the present disclosure.
As shown in fig. 4, the first image 430 of the disclosed embodiment is obtained, for example, by a panoramic effect camera, such as a 360-degree camera. Specifically, the panoramic effect camera first acquires the third image 410 and processes the third image 410 to obtain the first image 430. The third image 410 includes, for example, a plurality of target objects, including at least the first target object and the second target object. The first image includes at least an edge portion image 420 of the third image 410.
Specifically, the third image 410 obtained by the panoramic effect camera is, for example, a circular panoramic image, and processing the third image 410 can obtain an edge portion image 420 of the third image 410, wherein the edge portion image 420 is, for example, a ring-shaped image, and the edge portion image 420 at least includes a first target object and a second target object. Since the distortion of the middle portion of the third image 410 acquired by the panoramic effect camera is large, the third image 410 may be cut to obtain an edge portion image 420, and then the edge portion image 420 may be cut and unfolded to obtain the first image 430.
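The "cut and unfold" step can be pictured as sampling the annular edge region along angle and radius into a rectangular strip. The following is a rough pure-Python sketch under assumed conventions (the image is a list of rows, sampling is nearest-neighbor, and the function name is hypothetical); a real implementation would interpolate and correct lens distortion:

```python
import math

def unwrap_annulus(img, cx, cy, r_in, r_out, out_w, out_h):
    """Sample the annular region (radius r_in..r_out) of a circular
    panorama `img` (a 2-D list of pixel values) into an out_h x out_w
    rectangle: columns sweep the angle, rows sweep the radius, so the
    ring is "cut and unfolded" into a flat strip.
    """
    h, w = len(img), len(img[0])
    out = []
    for row in range(out_h):
        # Map row 0 to the outer radius (ring edge) and the last row
        # to the inner radius, matching an edge-portion crop.
        r = r_out - (r_out - r_in) * row / max(out_h - 1, 1)
        line = []
        for col in range(out_w):
            theta = 2 * math.pi * col / out_w
            x = min(max(int(cx + r * math.cos(theta)), 0), w - 1)
            y = min(max(int(cy + r * math.sin(theta)), 0), h - 1)
            line.append(img[y][x])
        out.append(line)
    return out

# Toy example: every pixel stores its own (x, y) coordinate.
img = [[(x, y) for x in range(9)] for y in range(9)]
strip = unwrap_annulus(img, cx=4, cy=4, r_in=2, r_out=4, out_w=8, out_h=3)
```

In practice a library routine (e.g. a polar-warp transform) would be used instead of explicit loops; the sketch only illustrates the geometry.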
According to the embodiment of the present disclosure, processing the third image 410 to obtain the first image 430 specifically includes, for example: a fifth position parameter of the first target object in the third image 410 and a sixth position parameter of the second target object in the third image 410 are determined. Then, the third image 410 is processed to obtain the first image 430 according to the fifth position parameter and the sixth position parameter.
That is, when the third image 410 is cropped to obtain the edge portion image 420, the position information of the first target object and the second target object in the third image 410 may be determined first. Cropping can then be performed according to this position information, so that neither the first target object nor the second target object is cut when the heavily distorted middle area of the third image 410 is removed, leaving both target objects intact in the edge portion image 420.
It will be appreciated that the third image includes regions of greater distortion. The third image can therefore be cropped to obtain the first image, so that the distortion of the first image is small. In addition, because the first image is cropped according to the position information of the first target object and the second target object in the third image, the integrity of both target objects in the first image is at least ensured while the distorted area of the third image is largely removed, so that the first image better meets the user's requirements.
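One way to picture the position-aware cropping is to shrink the inner cut radius until it no longer slices through any target object's bounding box. A hedged sketch with a hypothetical helper, assuming axis-aligned boxes (x, y, w, h) and a cut circle centered at `center`:

```python
def safe_inner_radius(center, boxes, default_r):
    """Pick the inner cut radius for removing the distorted middle of
    a circular panorama: start from default_r but shrink it so the cut
    never slices through any target object's bounding box.

    center is (cx, cy); each box is (x, y, w, h).  The nearest point
    of a box to the center bounds how far the inner cut may extend.
    (Hypothetical helper for illustration only.)
    """
    cx, cy = center
    r = default_r
    for (x, y, w, h) in boxes:
        # Closest point of the axis-aligned box to the center.
        nx = min(max(cx, x), x + w)
        ny = min(max(cy, y), y + h)
        dist = ((nx - cx) ** 2 + (ny - cy) ** 2) ** 0.5
        r = min(r, dist)
    return r

# A box whose nearest corner is 5 units from the center caps a
# requested cut radius of 10 at 5, keeping the object intact.
capped = safe_inner_radius(center=(0, 0), boxes=[(3, 4, 2, 2)], default_r=10)
```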
When at least one of the first target object and the second target object in the first image is processed, for example, contour information of the first target object or the second target object needs to be determined. After determining the contour information of the target object, it may be determined that the region within the contour is the region where the target object is located, and the region within the contour is processed.
In addition to determining contour information of the first target object and the second target object to process at least one of the first target object and the second target object, another implementation of the present disclosure may divide the first image into a plurality of regions and process at least one of the plurality of regions. It will be appreciated that the manner in which the regions are processed need not determine contour information of the target object. The manner in which the regions are processed is described below, for example.
For example, processing the first image to obtain a second image different from the first image comprises: dividing the first image into a plurality of regions according to the first position parameter and the second position parameter, and then processing at least one of the first region and the second region to obtain the second image. The first image may be divided, for example, according to the heights of the plurality of target objects along the positive direction of the first image: target objects with similar heights in that direction are assigned to the same region.
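The dividing rule can be sketched as a simple grouping of target objects by vertical position; the `tol` threshold and the dict layout are illustrative assumptions, not part of the disclosure:

```python
def group_by_height(objects, tol):
    """Group target objects whose vertical positions differ by at most
    tol into the same region, so one region can later be scaled or
    translated as a unit instead of processing each object separately.
    """
    groups = []
    for obj in sorted(objects, key=lambda o: o["y"]):
        # Chain onto the previous group if this object sits within
        # tol of the last object added to it; otherwise start a group.
        if groups and obj["y"] - groups[-1][-1]["y"] <= tol:
            groups[-1].append(obj)
        else:
            groups.append([obj])
    return groups

# Objects 1 and 2 share a region (similar height); object 3 does not.
objs = [{"id": 1, "y": 10}, {"id": 2, "y": 12}, {"id": 3, "y": 40}]
regions = group_by_height(objs, tol=5)
```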
According to an embodiment of the present disclosure, the plurality of regions includes, for example, at least a first region including a first target object and a second region including a second target object. For example, the second image may be obtained by performing scaling processing or translation processing on at least one of the first region and the second region. The specific process of the scaling process is described as fig. 5, for example, and the specific process of the panning process is described as fig. 6, for example.
Fig. 5 schematically shows a schematic diagram of a scaling process according to an embodiment of the present disclosure.
As shown in fig. 5, the first image 510 includes, for example, a first region 511 and a second region 512. At least one of the first region 511 and the second region 512 is scaled to obtain a second image 520. For example, fig. 5 shows that scaling the second area 512 results in the second image 520, e.g., reducing the second area 512. The second image 520 includes, for example, a first region 511 and a scaled second region 512. It can be seen that the height of the scaled second region 512 is more consistent with the height of the first region 511. That is, the height of the second target object in the scaled second region 512 is more consistent with the height of the first target object in the first region 511. The scaling direction is, for example, the direction B.
Fig. 6 schematically shows a schematic diagram of a translation process according to an embodiment of the present disclosure.
As shown in fig. 6, the first image 610 includes, for example, a first region 611 and a second region 612. The second image 620 is obtained by performing a translation process on at least one of the first region and the second region. For example, fig. 6 shows that the second region 612 is translated to obtain a second image 620. The second image 620 includes, for example, the first region 611 and the translated second region 612. It can be seen that the translated second region 612 is more consistent in height with the first region 611. That is, the heights of the second target object in the translated second region 612 and the first target object in the first region 611 are more consistent. The direction of translation is, for example, direction C.
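The translation of Fig. 6 can be illustrated as moving a horizontal band of rows upward and filling the vacated rows with background. This is a simplified sketch (whole-row bands, constant fill value, overwrite on paste) rather than the actual implementation:

```python
def translate_band(img, top, bottom, dy, fill=0):
    """Translate the horizontal band img[top:bottom] up by dy rows
    (direction C in Fig. 6), filling the vacated rows with `fill`.
    Rows are lists of pixel values; the image size is unchanged and
    the input image is not modified.
    """
    out = [row[:] for row in img]
    band = [row[:] for row in img[top:bottom]]
    # Clear the band's original location.
    for i in range(top, bottom):
        out[i] = [fill] * len(img[0])
    # Paste the band dy rows higher, clipping at the image border.
    for i, row in enumerate(band):
        dst = top - dy + i
        if 0 <= dst < len(out):
            out[dst] = row
    return out

# Toy image: row r is filled with the value r.  Move rows 4..5 up by 2.
img = [[r] * 4 for r in range(6)]
shifted = translate_band(img, top=4, bottom=6, dy=2)
```

A scaling step as in Fig. 5 would resample the band to a new height instead of shifting it; combining the two gives the mixed processing mentioned below.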
It is to be understood that in another embodiment, the second image may be obtained by performing scaling processing and translation processing on at least one of the first region and the second region. The specific process is, for example, a combination of the embodiment shown in fig. 5 and the embodiment shown in fig. 6, and is not described herein again.
According to the embodiment of the disclosure, dividing the first image into the plurality of regions and then processing it to obtain the second image improves the efficiency of image processing. For example, without determining the contour information of each target object to be processed, the region containing several target objects of similar height in the first image may be directly treated as one region and scaled or translated, processing all target objects in that region at once and thereby improving the processing efficiency of the image.
According to the embodiment of the disclosure, when at least one of the plurality of regions needs to be scaled or translated after the first image is divided into the plurality of regions, the electronic device may, for example, first determine reference information, so that the at least one region can be scaled or translated based on it. The reference information includes, for example, a reference region among the plurality of regions or a reference target object among the plurality of target objects. After the reference information is determined, the at least one region may be scaled or translated so that the processed region matches the height indicated by the reference information.
Alternatively, in another embodiment, the reference information may not need to be determined. Specifically, the at least one region may be scaled or translated according to the user's needs. For example, a processing instruction of a user for at least one of the first area and the second area may be received, and then at least one of the first area and the second area is processed based on the processing instruction. Wherein the processing instruction comprises at least one of a zoom instruction and a pan instruction. That is, the user may determine which region needs to be scaled or translated, and the user may determine a specific scaling amount or translation amount according to an actual requirement, so that the processed region meets a height requirement of the user for each target object in the second image. It can be understood that the first image is processed based on the processing instruction of the user, so that the processed second image meets the requirements of the user better, and the satisfaction degree of the user on the image processing result is improved.
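Dispatching on a user's zoom or pan instruction might look like the following sketch; the tuple encoding of instructions and the region descriptor are illustrative assumptions rather than the disclosed interface:

```python
def apply_instruction(region, instruction):
    """Apply a user processing instruction to a region descriptor.

    region is a dict with 'y' (top) and 'h' (height); the instruction
    is ('translate', dy) or ('scale', factor) -- the two instruction
    kinds named in the text.  The amount comes from the user, so no
    reference information needs to be determined.
    """
    kind, amount = instruction
    if kind == "translate":
        return {**region, "y": region["y"] + amount}
    if kind == "scale":
        return {**region, "h": round(region["h"] * amount)}
    raise ValueError(f"unknown instruction: {kind}")

# The user moves a region up 5 rows, or halves its height.
moved = apply_instruction({"y": 10, "h": 40}, ("translate", -5))
scaled = apply_instruction({"y": 10, "h": 40}, ("scale", 0.5))
```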
Another aspect of the present disclosure provides an electronic device including: one or more processors; memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the methods shown in FIGS. 1-6.
Fig. 7 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 7, the image processing apparatus 700 includes an obtaining and processing module 710, a determining module 720, and a processing module 730.
The obtaining and processing module 710 may be used to obtain and process the first image. According to the embodiment of the present disclosure, the obtaining and processing module 710 may, for example, perform the operation S110 described above with reference to fig. 1, which is not described herein again.
The determining module 720 may be configured to determine a first position parameter of the first target object and a second position parameter of the second target object when the first image comprises at least the first target object and the second target object. According to an embodiment of the present disclosure, the determining module 720 may perform, for example, the operation S120 described above with reference to fig. 1, which is not described herein again.
The processing module 730 may be configured to process the first image to obtain a second image different from the first image when the first position parameter and the second position parameter do not satisfy the condition. Wherein the third position parameter of the first target object and the fourth position parameter of the second target object in the second image satisfy a condition. According to the embodiment of the present disclosure, the processing module 730 may, for example, perform the operation S130 described above with reference to fig. 1, which is not described herein again.
According to an embodiment of the disclosure, the third position parameter of the first target object and the fourth position parameter of the second target object in the second image satisfying the condition includes at least one of: an included angle between a connecting line of the first target object and the second target object in the second image and at least one first edge of the second image is smaller than an angle threshold, and a distance difference between the first target object and the second target object in the second image from the same first edge of the at least one first edge is smaller than a distance threshold.
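The two alternatives of the condition — a small angle between the objects' connecting line and a first edge, or a small difference in the objects' distances from the same edge — can be checked directly. A sketch assuming point positions (x, y), a horizontal bottom edge at y = img_h, and hypothetical parameter names:

```python
import math

def satisfies_condition(p1, p2, img_h, angle_thresh_deg, dist_thresh):
    """Check the two alternative conditions from the text for points
    p1, p2 (the two target objects' positions, as (x, y)):

    1. the line p1-p2 makes an angle with a horizontal first edge
       smaller than angle_thresh_deg, OR
    2. the objects' distances from the same (bottom) edge differ by
       less than dist_thresh.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    # Angle between the connecting line and the horizontal edge.
    angle = 90.0 if dx == 0 else math.degrees(math.atan(abs(dy) / abs(dx)))
    angle_ok = angle < angle_thresh_deg
    # Distance of each point from the bottom edge y = img_h.
    dist_ok = abs((img_h - p1[1]) - (img_h - p2[1])) < dist_thresh
    return angle_ok or dist_ok

# Nearly level objects satisfy the condition; badly mismatched ones do not.
ok = satisfies_condition((10, 50), (90, 52), img_h=100,
                         angle_thresh_deg=5, dist_thresh=5)
```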
According to an embodiment of the disclosure, processing the first image to obtain a second image different from the first image comprises at least one of: moving the first target object in the first image to obtain the second image, wherein the first position parameter is different from the third position parameter and the second position parameter is the same as the fourth position parameter; moving the second target object in the first image to obtain the second image, wherein the second position parameter is different from the fourth position parameter and the first position parameter is the same as the third position parameter; and moving both the first target object and the second target object in the first image to obtain the second image, wherein the first position parameter is different from the third position parameter and the second position parameter is different from the fourth position parameter.
According to an embodiment of the present disclosure, the apparatus 700 may further include, for example: the device comprises an obtaining module and a processing module. The obtaining module is used for obtaining a third image. The processing module is used for processing the third image to obtain a first image, wherein the first image at least comprises an edge partial image of the third image.
According to an embodiment of the present disclosure, processing the third image to obtain the first image includes: determining a fifth position parameter of the first target object in the third image and a sixth position parameter of the second target object in the third image, and processing the third image according to the fifth position parameter and the sixth position parameter to obtain the first image.
According to an embodiment of the present disclosure, processing the first image to obtain a second image different from the first image includes: dividing the first image into a plurality of regions according to the first position parameter and the second position parameter, wherein the plurality of regions at least include a first region and a second region, the first region includes the first target object, and the second region includes the second target object; and processing at least one of the first region and the second region to obtain the second image.
According to an embodiment of the disclosure, processing at least one of the first region and the second region, resulting in the second image comprises at least one of: at least one of the first region and the second region is subjected to scaling processing, and at least one of the first region and the second region is subjected to translation processing.
According to an embodiment of the present disclosure, processing at least one of the first region and the second region, and obtaining the second image includes: receiving a processing instruction of a user for at least one of the first area and the second area, wherein the processing instruction comprises at least one of a zooming instruction and a translating instruction, and processing at least one of the first area and the second area based on the processing instruction.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any of the obtaining and processing module 710, the determining module 720, and the processing module 730 may be combined in one module to be implemented, or any one of the modules may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the obtaining and processing module 710, the determining module 720, and the processing module 730 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware. Alternatively, at least one of the obtaining and processing module 710, the determining module 720 and the processing module 730 may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
FIG. 8 schematically shows a block diagram of a computer system for implementing image processing according to an embodiment of the present disclosure. The computer system illustrated in FIG. 8 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 8, a computer system 800 implementing image processing includes a processor 801, a computer-readable storage medium 802. The system 800 may perform a method according to an embodiment of the present disclosure.
In particular, the processor 801 may include, for example, a general purpose microprocessor, an instruction set processor and/or related chip set and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and/or the like. The processor 801 may also include onboard memory for caching purposes. The processor 801 may be a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
Computer-readable storage medium 802 may be, for example, any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the readable storage medium include: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or wired/wireless communication links.
The computer-readable storage medium 802 may include a computer program 803, which computer program 803 may include code/computer-executable instructions that, when executed by the processor 801, cause the processor 801 to perform a method according to an embodiment of the present disclosure, or any variant thereof.
The computer program 803 may be configured with, for example, computer program code comprising computer program modules. For example, in an example embodiment, the code in computer program 803 may include one or more program modules, including, for example, module 803A, module 803B, and so on. It should be noted that the division and number of the modules are not fixed, and those skilled in the art may use suitable program modules or program module combinations according to actual situations, so that when these program modules are executed by the processor 801, the processor 801 can carry out the method according to the embodiment of the present disclosure or any variation thereof.
According to an embodiment of the present disclosure, at least one of the obtaining and processing module 710, the determining module 720 and the processing module 730 may be implemented as a computer program module as described with reference to fig. 8, which when executed by the processor 801 may implement the respective operations described above.
The present disclosure also provides a computer-readable medium, which may be embodied in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The above-mentioned computer-readable medium carries one or more programs which, when executed, implement the above-mentioned image processing method.
According to embodiments of the present disclosure, a computer readable medium may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, optical fiber cable, radio frequency signals, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or combinations are not expressly recited in the present disclosure. In particular, various combinations and/or combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or associations are within the scope of the present disclosure.
While the disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined not only by the appended claims, but also by equivalents thereof.

Claims (10)

1. An image processing method comprising:
obtaining and processing a first image;
determining a first position parameter of the first target object and a second position parameter of a second target object if the first image comprises at least the first target object and the second target object; and
processing the first image to obtain a second image different from the first image if the first position parameter and the second position parameter do not meet a condition;
wherein the third position parameter of the first target object and the fourth position parameter of the second target object in the second image satisfy the condition.
2. The method of claim 1, wherein the third position parameter of the first target object and the fourth position parameter of the second target object in the second image satisfying the condition comprises at least one of:
an included angle between a connecting line of the first target object and the second target object in the second image and at least one first edge of the second image is smaller than an angle threshold; and
the difference in distance between the first target object and the second target object in the second image from the same first edge of the at least one first edge is less than a distance threshold.
3. The method of claim 1 or 2, wherein the processing the first image to obtain a second image different from the first image comprises at least one of:
moving a first target object in the first image to obtain a second image, wherein the first position parameter is different from the third position parameter, and the second position parameter is the same as the fourth position parameter;
moving a second target object in the first image to obtain a second image, wherein the second position parameter is different from the fourth position parameter, and the first position parameter is the same as the third position parameter; and
moving a first target object and a second target object in the first image to obtain a second image, wherein the first position parameter is different from the third position parameter, and the second position parameter is different from the fourth position parameter.
4. The method of claim 1, further comprising:
obtaining a third image; and
processing the third image to obtain the first image, wherein the first image at least comprises an edge part image of the third image.
5. The method of claim 4, wherein the processing the third image to obtain the first image comprises:
determining a fifth position parameter of the first target object in the third image and a sixth position parameter of the second target object in the third image; and
processing the third image according to the fifth position parameter and the sixth position parameter to obtain the first image.
6. The method of claim 1, wherein the processing the first image to obtain a second image different from the first image comprises:
dividing the first image into a plurality of regions according to the first position parameter and the second position parameter, wherein the plurality of regions at least include a first region and a second region, the first region includes the first target object, and the second region includes the second target object; and
processing at least one of the first area and the second area to obtain the second image.
7. The method of claim 6, wherein the processing at least one of the first region and the second region to obtain the second image comprises at least one of:
scaling at least one of the first region and the second region; and
performing a translation process on at least one of the first region and the second region.
8. The method of claim 6, wherein the processing at least one of the first region and the second region to obtain the second image comprises:
receiving a processing instruction of a user for at least one of the first area and the second area, wherein the processing instruction comprises at least one of a zooming instruction and a translating instruction; and
processing at least one of the first region and the second region based on the processing instruction.
9. An image processing apparatus comprising:
an obtaining and processing module for obtaining and processing a first image;
a determining module for determining a first position parameter of the first target object and a second position parameter of the second target object if the first image comprises at least the first target object and the second target object; and
a processing module for processing the first image to obtain a second image different from the first image if the first position parameter and the second position parameter do not meet the condition;
wherein the third position parameter of the first target object and the fourth position parameter of the second target object in the second image satisfy the condition.
10. An electronic device, comprising:
one or more processors; and
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-8.
CN201911403200.0A 2019-12-30 2019-12-30 Image processing method and device and electronic equipment Pending CN111083378A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911403200.0A CN111083378A (en) 2019-12-30 2019-12-30 Image processing method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN111083378A true CN111083378A (en) 2020-04-28

Family

ID=70320172



Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000201273A * 1998-11-06 2000-07-18 Seiko Epson Corp. Medium storing image data generation program, image data generation device, and image data generation method
CN101128793A * 2005-03-07 2008-02-20 Konami Digital Entertainment Co., Ltd. Information processing device, image movement instructing method, and information storage medium
CN108833784A * 2018-06-26 2018-11-16 Oppo (Chongqing) Intelligent Technology Co., Ltd. Adaptive patterning method, mobile terminal, and computer-readable storage medium
CN109151322A * 2018-09-30 2019-01-04 Lenovo (Beijing) Co., Ltd. Image processing method and electronic device
CN109413327A * 2018-09-30 2019-03-01 Lenovo (Beijing) Co., Ltd. Control method and multimedia device
CN109767397A * 2019-01-09 2019-05-17 Samsung Electronics (China) R&D Center Image optimization method and system based on artificial intelligence


Similar Documents

Publication Publication Date Title
US11538232B2 (en) Tracker assisted image capture
CN111914692B (en) Method and device for acquiring damage assessment image of vehicle
TWI539813B (en) Image composition apparatus and method
CN107368776B (en) Vehicle loss assessment image acquisition method and device, server and terminal equipment
US9602796B2 (en) Technologies for improving the accuracy of depth cameras
KR102356448B1 (en) Method for composing image and electronic device thereof
US10620826B2 (en) Object selection based on region of interest fusion
US10674066B2 (en) Method for processing image and electronic apparatus therefor
CN104301596A (en) Video processing method and device
JP2018072957A (en) Image processing method, image processing system and program
CN108090486B (en) Image processing method and device in billiard game
CN110147465A (en) Image processing method, device, equipment and medium
CN105701762B (en) Picture processing method and electronic equipment
CN109829447B (en) Method and device for determining a three-dimensional frame of a vehicle
JP2017515188A (en) Method and device for processing pictures
US20200294308A1 (en) Information processing apparatus and accumulated images selecting method
CN110991385A (en) Method and device for identifying ship driving track and electronic equipment
CN109791703B (en) Generating three-dimensional user experience based on two-dimensional media content
WO2021056501A1 (en) Feature point extraction method, movable platform and storage medium
CN111083378A (en) Image processing method and device and electronic equipment
US20120113094A1 (en) Image processing apparatus, image processing method, and computer program product thereof
EP2919450B1 (en) A method and a guided imaging unit for guiding a user to capture an image
CN110555833B (en) Image processing method, image processing apparatus, electronic device, and medium
CN110418059B (en) Image processing method and device applied to electronic equipment, electronic equipment and medium
CN115511870A (en) Object detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200428