CN110572578A - Image processing method, apparatus, computing device, and medium - Google Patents

Image processing method, apparatus, computing device, and medium

Info

Publication number
CN110572578A
CN110572578A (application CN201910939346.0A)
Authority
CN
China
Prior art keywords
image
processing
images
processed
specific object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910939346.0A
Other languages
Chinese (zh)
Inventor
高小菊
贺跃理
张祎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201910939346.0A
Publication of CN110572578A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides an image processing method, including: acquiring a first image; processing the first image to obtain a plurality of second images; processing at least one second image in the plurality of second images according to the deformation information of the specific object of each second image in the plurality of second images to obtain at least one processed second image; and obtaining a third image based on the at least one processed second image and the plurality of second images, wherein the deformation amount of the specific object of the third image is smaller than that of the specific object of the first image. The present disclosure also provides an image processing apparatus, a computing device, and a computer-readable storage medium.

Description

Image processing method, apparatus, computing device, and medium
Technical Field
The present disclosure relates to an image processing method, an image processing apparatus, a computing device, and a computer-readable storage medium.
Background
In many scenarios, a panoramic image needs to be acquired, for example with a panoramic effect camera. However, the panoramic image obtained in this way suffers from severe distortion, and directly outputting the distorted image to the user degrades the viewing experience. How to process a distorted image so as to reduce its degree of distortion has therefore become a technical problem that urgently needs to be solved.
Disclosure of Invention
One aspect of the present disclosure provides an image processing method, including: acquiring a first image; processing the first image to obtain a plurality of second images; processing at least one of the second images according to deformation information of a specific object of each of the second images to obtain at least one processed second image; and obtaining a third image based on the at least one processed second image and the second images, wherein the deformation amount of the specific object of the third image is smaller than that of the specific object of the first image.
Optionally, the processing at least one of the plurality of second images according to the deformation information of the specific object in each of the plurality of second images includes: determining the at least one second image needing to be processed according to the deformation information of the specific object of each second image in the plurality of second images, respectively determining a processing mode aiming at each second image in the at least one second image according to the deformation information of the specific object of each second image in the at least one second image, and respectively processing each second image in the at least one second image based on the determined processing mode.
Optionally, the determining, according to the deformation information of the specific object of each of the plurality of second images, the at least one second image that needs to be processed includes: determining an amount of deformation of a particular object of a current second image of the plurality of second images, and in response to determining that the amount of deformation is greater than a particular threshold, determining the current second image to be the at least one second image that needs to be processed.
Optionally, the acquiring the first image includes: and acquiring a fourth image, and performing selection processing on the fourth image to obtain the first image, wherein the first image comprises an annular partial image of the fourth image.
Optionally, the selecting the fourth image to obtain the first image includes: and determining a selection angle, and performing selection processing on the fourth image based on the selection angle to obtain the first image.
Optionally, the processing the first image to obtain a plurality of second images includes: processing the first image according to a processing strategy to obtain the plurality of second images, wherein the processing strategy comprises at least one of the following: an equal proportion cutting strategy and a priority strategy, wherein the priority strategy is a strategy made according to user information in the first image.
Optionally, the plurality of second images include the at least one processed second image and at least one unprocessed second image. Obtaining a third image based on the at least one processed second image and the plurality of second images comprises: obtaining the third image based on the at least one processed second image and the at least one unprocessed second image.
Another aspect of the present disclosure provides an image processing apparatus including: a first acquisition module, a first processing module, a second processing module, and a second acquisition module. The first acquisition module is configured to acquire a first image. The first processing module is configured to process the first image to obtain a plurality of second images. The second processing module is configured to process at least one second image of the plurality of second images according to the deformation information of the specific object of each of the plurality of second images to obtain at least one processed second image. The second acquisition module is configured to obtain a third image based on the at least one processed second image and the plurality of second images, wherein the deformation amount of the specific object of the third image is smaller than that of the specific object of the first image.
Optionally, the processing at least one of the plurality of second images according to the deformation information of the specific object in each of the plurality of second images includes: determining the at least one second image needing to be processed according to the deformation information of the specific object of each second image in the plurality of second images, respectively determining a processing mode aiming at each second image in the at least one second image according to the deformation information of the specific object of each second image in the at least one second image, and respectively processing each second image in the at least one second image based on the determined processing mode.
Optionally, the determining, according to the deformation information of the specific object of each of the plurality of second images, the at least one second image that needs to be processed includes: determining an amount of deformation of a particular object of a current second image of the plurality of second images, and in response to determining that the amount of deformation is greater than a particular threshold, determining the current second image to be the at least one second image that needs to be processed.
Optionally, the acquiring the first image includes: and acquiring a fourth image, and performing selection processing on the fourth image to obtain the first image, wherein the first image comprises an annular partial image of the fourth image.
Optionally, the selecting the fourth image to obtain the first image includes: and determining a selection angle, and performing selection processing on the fourth image based on the selection angle to obtain the first image.
Optionally, the processing the first image to obtain a plurality of second images includes: processing the first image according to a processing strategy to obtain the plurality of second images, wherein the processing strategy comprises at least one of the following: an equal proportion cutting strategy and a priority strategy, wherein the priority strategy is a strategy made according to user information in the first image.
Optionally, the plurality of second images include the at least one processed second image and at least one unprocessed second image. Obtaining a third image based on the at least one processed second image and the plurality of second images comprises: obtaining the third image based on the at least one processed second image and the at least one unprocessed second image.
Another aspect of the disclosure provides a computing device comprising: one or more processors; memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as above.
Another aspect of the disclosure provides a non-transitory readable storage medium storing computer-executable instructions for implementing the method as above when executed.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as above when executed.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
Fig. 1 schematically shows an application scenario of an image processing method according to an embodiment of the present disclosure;
FIG. 2 schematically shows a flow chart of an image processing method according to an embodiment of the present disclosure;
FIGS. 3 and 4A-4C schematically illustrate a schematic diagram of an image processing method according to an embodiment of the disclosure;
Fig. 5 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure; and
FIG. 6 schematically shows a block diagram of a computer system for implementing image processing according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable control apparatus to produce a machine, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.
Accordingly, the techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable medium having instructions stored thereon for use by or in connection with an instruction execution system. In the context of this disclosure, a computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the instructions. For example, the computer readable medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the computer readable medium include: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or wired/wireless communication links.
An embodiment of the present disclosure provides an image processing method, including: acquiring a first image and processing the first image to obtain a plurality of second images. Then, at least one second image of the plurality of second images is processed according to the deformation information of the specific object of each of the plurality of second images to obtain at least one processed second image. Finally, a third image is obtained based on the at least one processed second image and the plurality of second images, wherein the deformation amount of the specific object of the third image is smaller than that of the specific object of the first image.
Fig. 1 schematically shows an application scenario of an image processing method according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a scenario in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the application scene 100 may include, for example, a panoramic image acquired by a panoramic effect camera. The panoramic effect camera may be, for example, a 360-degree camera.
The panoramic image is, for example, distorted to a large extent, which affects the viewing experience of the user. The embodiments of the present disclosure therefore aim to process the distorted image so as to reduce its degree of distortion and improve the user's viewing experience.
An image processing method according to an exemplary embodiment of the present disclosure is described below with reference to fig. 2, 3, and 4A to 4C in conjunction with the application scenario of fig. 1. It should be noted that the above application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present disclosure, and the embodiments of the present disclosure are not limited in this respect. Rather, embodiments of the present disclosure may be applied to any scenario where applicable.
Fig. 2 schematically shows a flow chart of an image processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S210 to S240.
In operation S210, a first image is acquired.
According to an embodiment of the present disclosure, the first image may be, for example, a panoramic image acquired by a panoramic effect camera; the first image is a distorted image.
In operation S220, the first image is processed to obtain a plurality of second images.
According to an embodiment of the present disclosure, the first image is, for example, cut into a plurality of second images. The first image may be processed according to a processing strategy to obtain the plurality of second images, and the processing strategy includes, for example, a plurality of different strategies.
According to an embodiment of the present disclosure, one processing strategy is, for example, an equal-proportion cutting strategy, i.e., cutting the first image uniformly into a plurality of second images. Another processing strategy is, for example, an unequal-proportion cutting strategy, i.e., the sizes of the plurality of second images obtained by cutting need not be uniform. Yet another processing strategy may be, for example, a priority strategy, i.e., a strategy formulated based on the user information in the first image. For example, the first image may be cut according to whether users are present in the image, so that cutting through the regions where users are located is avoided as far as possible and the integrity of the users is preserved. The cutting strategy may also be determined according to the importance of each user, so that the integrity of the more important users is preferentially preserved during cutting.
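As a minimal illustration of the equal-proportion cutting strategy, the sketch below slices a panoramic first image into a fixed number of vertical strips of (nearly) equal width. The choice of vertical strips, the function name and the use of NumPy are illustrative assumptions; the disclosure does not prescribe a particular cutting direction or implementation.

```python
import numpy as np

def cut_equal_proportion(first_image: np.ndarray, num_pieces: int) -> list:
    """Cut the first image into num_pieces vertical strips of (nearly) equal width."""
    width = first_image.shape[1]
    # np.array_split tolerates widths that are not an exact multiple of num_pieces.
    column_groups = np.array_split(np.arange(width), num_pieces)
    return [first_image[:, cols[0]:cols[-1] + 1] for cols in column_groups]

# Example: a 512 x 2048 panorama is cut into four 512 x 512 second images.
panorama = np.zeros((512, 2048, 3), dtype=np.uint8)
second_images = cut_equal_proportion(panorama, 4)
print([img.shape for img in second_images])  # [(512, 512, 3), ...]
```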
According to an embodiment of the present disclosure, processing the first image according to the priority strategy may specifically include: performing image recognition on the first image; if the recognition result indicates that the first image contains users, further recognizing the distribution information of the users in the first image; and cutting the first image based on the distribution information of the users to obtain the plurality of second images, so that the integrity of the users in the second images satisfies a preset condition, i.e., cutting through the regions where the users are located is avoided as far as possible when the first image is cut.
Further, when image recognition is performed on the first image to obtain the distribution information of the users, the region where a target user is located can additionally be recognized. The target user may be, for example, a user of high importance. When the first image is cut based on the distribution information of the users to obtain the plurality of second images, the integrity of the target user can be preferentially preserved. Alternatively, all users in the first image can be recognized, the priority of each user can be determined, and the users can be ranked by priority. When the first image is then cut, the integrity of the higher-priority users is preferentially preserved according to the ranking. It can be understood that cutting the first image based on the priority strategy avoids cutting through the regions where the users are located as far as possible, thereby preserving the integrity of the users.
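A minimal sketch of the priority strategy described above, assuming the users' horizontal extents have already been obtained by image recognition (for example with a person detector); cut positions that would fall inside a user region are moved to the nearer edge of that region. The nudging rule is an assumption for illustration, and prioritisation among users is omitted for brevity.

```python
def adjust_cuts_to_avoid_users(width: int, num_pieces: int, user_ranges: list) -> list:
    """Return vertical cut positions moved so that no cut splits a detected user region.

    user_ranges: list of (x_start, x_end) horizontal extents of detected users.
    """
    cuts = [width * i // num_pieces for i in range(1, num_pieces)]
    adjusted = []
    for cut in cuts:
        for x_start, x_end in user_ranges:
            if x_start < cut < x_end:
                # Move the cut to the nearer edge of the user region it would split.
                cut = x_start if cut - x_start <= x_end - cut else x_end
        adjusted.append(cut)
    return adjusted

# A cut at x = 1024 would split a user spanning (1000, 1100), so it is moved to x = 1000.
print(adjust_cuts_to_avoid_users(2048, 2, [(1000, 1100)]))  # [1000]
```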
In operation S230, at least one of the plurality of second images is processed according to the deformation information of the specific object of each of the plurality of second images, resulting in at least one processed second image.
According to an embodiment of the present disclosure, after the plurality of second images are obtained by cutting, some of the plurality of second images can be processed. For example, each second image contains a specific object, and the deformation information of the specific object represents, for example, the degree to which the specific object is bent or distorted. For instance, if the specific object in a second image to be processed is a table and the line of the table edge is bent due to distortion, the second image can be processed according to the bending degree of the table edge, so that the deformation amount of the specific object in the processed second image is reduced, for example, the bending of the table-edge line is reduced or the line becomes straight; that is, the degree of distortion of the processed second image is reduced.
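The disclosure does not fix how the deformation information is computed. One plausible measure, sketched below purely as an assumption, is the deviation of an edge that should be straight (such as the table edge mentioned above) from the chord joining its endpoints, normalised so that it can be read as a distortion rate.

```python
import numpy as np

def edge_deformation(edge_points: np.ndarray) -> float:
    """Deformation of a supposedly straight edge: maximum perpendicular distance of the
    sampled edge points from the chord joining its endpoints, divided by the chord length
    (so a value of 0.12 corresponds to a 12% distortion rate)."""
    p0, p1 = edge_points[0].astype(float), edge_points[-1].astype(float)
    chord = p1 - p0
    chord_len = float(np.linalg.norm(chord))
    rel = edge_points.astype(float) - p0
    # Perpendicular distance of each point from the chord via the 2-D cross product.
    distances = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0]) / chord_len
    return float(distances.max() / chord_len)

# Example: a bowed "table edge" sampled along x with a sinusoidal bulge of 8 pixels.
x = np.linspace(0, 100, 50)
y = 8.0 * np.sin(np.pi * x / 100)
print(edge_deformation(np.stack([x, y], axis=1)))  # ~0.08
```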
In operation S240, a third image is obtained based on the at least one processed second image and the plurality of second images. For example, a third image is derived based on the at least one processed second image and the at least one unprocessed second image. Wherein the amount of deformation of the specific object of the third image is smaller than the amount of deformation of the specific object of the first image.
According to an embodiment of the present disclosure, the plurality of second images includes, for example, image 1, image 2, image 3 and image 4. If image 1 and image 2 are processed while image 3 and image 4 are not, the processed image 1 and image 2 and the unprocessed image 3 and image 4 may be stitched together to obtain the third image. The deformation amount of the specific object in the third image is thereby reduced, i.e., the degree of distortion of the third image is reduced.
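A sketch of operation S240 under the simplifying assumption that the second images are vertical strips of equal height cut from the first image, so that the third image can be obtained by concatenating the processed and unprocessed strips in their original order; the blending or seam handling that a real stitching step may require is not shown.

```python
import numpy as np

def stitch_third_image(second_images: list, processed: dict) -> np.ndarray:
    """Concatenate the second images in order, substituting processed versions where available.

    processed maps the index of a second image to its processed replacement, e.g.
    {0: processed_image_1, 1: processed_image_2} while images 3 and 4 stay unprocessed.
    """
    parts = [processed.get(i, img) for i, img in enumerate(second_images)]
    return np.concatenate(parts, axis=1)

# Example: images 1 and 2 were processed, images 3 and 4 were not.
strips = [np.zeros((512, 512, 3), dtype=np.uint8) for _ in range(4)]
corrected = {0: strips[0].copy(), 1: strips[1].copy()}
print(stitch_third_image(strips, corrected).shape)  # (512, 2048, 3)
```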
According to the embodiments of the present disclosure, the distorted image is divided into a plurality of parts, each part is processed separately, and the processed parts are stitched into a new image, so that the degree of distortion of the stitched image is reduced and displaying it to the user no longer degrades the viewing experience; the user's viewing experience is thereby improved. In addition, when each cut image is processed, it is processed automatically according to the deformation information of the specific object in that image, which improves the processing effect. Moreover, cutting the image into a plurality of parts and processing each part separately facilitates higher processing precision.
According to an embodiment of the present disclosure, the operation S230 includes the following steps (1) to (3), for example.
(1) Determining the at least one second image that needs to be processed according to the deformation information of the specific object of each second image in the plurality of second images.
In accordance with an implementation of the present disclosure, for example, an amount of deformation of a particular object of a current second image of the plurality of second images is determined, and in response to determining that the amount of deformation is greater than a particular threshold, the current second image is determined to be at least one second image that requires processing.
For example, the plurality of second images includes image 1, image 2, image 3 and image 4. The deformation amount of the specific object in each of these images is determined, and the images whose deformation amount is greater than a specific threshold are the ones to be processed. For instance, if the deformation amount of the specific object in image 1 and image 2 is greater than the specific threshold, image 1 and image 2 are processed; if the deformation amount of the specific object in image 3 and image 4 is less than or equal to the specific threshold, image 3 and image 4 need not be processed. The deformation amount being greater than a specific threshold may mean, for example, that the distortion rate is greater than a specific distortion rate, for example 12% (a sketch combining this selection with step (3) is given after step (3) below).
(2) Determining a processing mode for each second image of the at least one second image according to the deformation information of the specific object of that second image. For example, since the deformation information of the specific object differs between different second images, a corresponding processing mode may be determined from the deformation information of each second image.
(3) Processing each second image of the at least one second image based on the processing mode determined for it.
The processing mode includes at least one of the following: stretching, rotating, cropping, supplementing, and adjusting the display angle of the current second image. It can be understood that the same second image may be processed with several processing modes; for example, image 1 may first be stretched and then cropped. In other words, whichever processing mode is used, the embodiments of the present disclosure aim to reduce the degree of distortion of the second image. For example, when distortion of the second image causes the table-edge line to bend, the second image is processed with one or more processing modes so that in the processed second image the bending of the table-edge line is reduced or the line becomes straight.
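As a concrete illustration of steps (1) to (3), the sketch below first selects the second images whose distortion rate exceeds the 12% threshold mentioned above and then applies a processing mode to each of them. The selection rule follows the description; the particular modes chosen (a horizontal stretch for mild deformation, a small rotation followed by a crop for stronger deformation), the numeric parameters and the use of OpenCV are illustrative assumptions rather than requirements of the disclosure.

```python
import cv2
import numpy as np

def select_images_to_process(distortion_rates: dict, threshold: float = 0.12) -> list:
    """Step (1): keep only the second images whose deformation exceeds the threshold."""
    return [name for name, rate in distortion_rates.items() if rate > threshold]

def process_second_image(image: np.ndarray, deformation: float) -> np.ndarray:
    """Steps (2)-(3): choose and apply a processing mode from the deformation amount.

    Illustrative rule only: mild deformation -> stretching, stronger deformation ->
    rotating followed by cropping.
    """
    h, w = image.shape[:2]
    if deformation <= 0.2:
        # Stretching: widen the image in proportion to the deformation.
        return cv2.resize(image, (int(w * (1 + deformation)), h), interpolation=cv2.INTER_LINEAR)
    # Rotating, then cropping to remove the borders exposed by the rotation.
    matrix = cv2.getRotationMatrix2D((w / 2, h / 2), 2.0, 1.0)
    rotated = cv2.warpAffine(image, matrix, (w, h))
    margin = int(0.05 * min(h, w))
    return rotated[margin:h - margin, margin:w - margin]

# Example: images 1 and 2 exceed the 12% distortion rate and are processed; 3 and 4 are not.
rates = {"image 1": 0.15, "image 2": 0.30, "image 3": 0.10, "image 4": 0.05}
to_process = select_images_to_process(rates)
dummy = {name: np.zeros((512, 512, 3), dtype=np.uint8) for name in rates}
processed = {name: process_second_image(dummy[name], rates[name]) for name in to_process}
print(to_process, [processed[name].shape for name in to_process])
```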
According to an embodiment of the present disclosure, the operation S210 includes, for example: acquiring a fourth image and performing selection processing on the fourth image to obtain the first image. The first image includes an annular partial image of the fourth image. The fourth image may be, for example, an initial image obtained by the panoramic effect camera, and the first image is obtained by cutting away the regions of the fourth image that have a large degree of distortion.
Fig. 3 and 4A-4C schematically illustrate schematic diagrams of an image processing method according to an embodiment of the present disclosure.
For example, the panoramic effect camera may acquire an image of a stereoscopic space (which may be, for example, a spherical space), as shown in the left diagram of fig. 3. In the acquired fourth image, the degree of distortion of the images corresponding to the upper and lower portions of the spherical space is large. Therefore, the embodiment of the present disclosure may remove from the fourth image the images corresponding to the upper and lower partial spaces of the spherical space, and the resulting first image may be, for example, the image corresponding to the middle partial space of the spherical space.
For example, performing selection processing on the fourth image to obtain the first image includes: determining a selection angle, and performing selection processing on the fourth image based on the selection angle to obtain the first image. The selection angle may be used, for example, to select the image corresponding to the middle partial space of the spherical space. As shown in fig. 3, the selection angle is, for example, A, and the selection angle A includes, for example, an angle size and angle direction information. The angle direction information may, for example, represent the orientation of the bisector of the selection angle A with respect to a particular axis P of the spherical space. When the panoramic effect camera acquires a panoramic image of the spherical space, the further an image portion lies from the center of the sphere along the particular axis P, the greater its degree of distortion; that is, the images corresponding to the upper and lower partial spaces of the spherical space shown in fig. 3 are more distorted than the image corresponding to the middle partial space.
For example, the selection angle A may be set according to the actual application; the present disclosure takes as an example a selection angle A of 70 degrees whose bisector is perpendicular to the particular axis P, so that the angles corresponding to the removed upper and lower portions of the image are, for example, 55 degrees each. With this selection angle A, the image corresponding to the middle partial space of the spherical space in the fourth image can be selected as the first image, the middle partial space being shown, for example, in the right diagram of fig. 3.
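Assuming the fourth image is stored as an equirectangular panorama whose rows span +90 to -90 degrees of elevation — a representation the disclosure does not mandate — the 70-degree selection angle centred on the equator amounts to keeping the rows within ±35 degrees of elevation, as sketched below.

```python
import numpy as np

def select_middle_band(fourth_image: np.ndarray, selection_angle_deg: float = 70.0) -> np.ndarray:
    """Keep the rows of an equirectangular panorama that fall inside the selection angle.

    Rows run from +90 degrees elevation (top) to -90 degrees (bottom); a 70-degree
    selection angle whose bisector is perpendicular to the vertical axis keeps the
    rows within +/-35 degrees and discards the 55-degree caps at the top and bottom.
    """
    height = fourth_image.shape[0]
    half = selection_angle_deg / 2.0
    top = int(round((90.0 - half) / 180.0 * height))
    bottom = int(round((90.0 + half) / 180.0 * height))
    return fourth_image[top:bottom]

# A 180-row panorama keeps 70 of its 180 rows.
pano = np.zeros((180, 360, 3), dtype=np.uint8)
print(select_middle_band(pano).shape)  # (70, 360, 3)
```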
According to an embodiment of the present disclosure, when the fourth image of the spherical space is acquired by the panoramic effect camera, it can be understood that the fourth image formed during imaging is a planar image. However, depending on the structure of the panoramic effect camera or of its photosensitive device, the fourth image may be a different kind of planar image, for example a square image or a circular image. The embodiment of the present disclosure obtains the first image by processing the fourth image, and the first image may include, for example, the image of an annular portion of the fourth image.
As shown in fig. 4A, for example, when the fourth image is acquired by the panoramic effect camera, the fourth image may be generated by a first imaging sensor. The fourth image generated by the first imaging sensor is, for example, a square image. The fourth image is processed according to the selection angle A, and the resulting first image is, for example, the annular partial image of the fourth image, i.e., the shaded partial image shown in fig. 4A. The annular partial image corresponds to the image of the middle partial space of the spherical space selected by the selection angle A. A line segment connecting the center of the fourth image to any point on its edge passes through the annular partial image and is divided into several parts whose lengths a1, b1, c1 are in the same ratio as the angles A, B, C, i.e., a1 : b1 : c1 = A : B : C.
As shown in fig. 4B, for example, when the fourth image is acquired by the panoramic effect camera, the fourth image may be generated by a second imaging sensor. The fourth image generated by the second imaging sensor is, for example, a circular image (or an elliptical image). The first image is the shaded partial image shown in fig. 4B. The annular partial image corresponds to the image of the middle partial space of the spherical space selected by the selection angle A. A line segment connecting the center of the fourth image to any point on its edge passes through the annular partial image and is divided into several parts whose lengths a2, b2, c2 satisfy, for example, a2 : b2 : c2 = A : B : C.
It can be understood that, because a square image has corners, the distortion near those corners is severe, whereas a circular image has no corners to introduce such distortion. Therefore, the circular image generated by the second imaging sensor is less distorted than the square image generated by the first imaging sensor, and the second imaging sensor may be preferred when acquiring the fourth image. However, when the second imaging sensor is too expensive, or the first imaging sensor must be used for other reasons, the fourth image may be generated with the first imaging sensor in the manner shown in fig. 4C in order to avoid the distortion caused by a square image as far as possible.
As shown in fig. 4C, for example, the fourth image may be generated by the first imaging sensor as the partial image bounded by the inscribed circle of a square image (e.g., a square), and the first image is the shaded partial image shown in fig. 4C. The annular partial image corresponds to the image of the middle partial space of the spherical space selected by the selection angle A. A line segment connecting the center of the fourth image to any point on the edge of the inscribed circle passes through the annular partial image and is divided into several parts whose lengths a3, b3, c3 satisfy, for example, a3 : b3 : c3 = A : B : C.
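To make the ratio a3 : b3 : c3 = A : B : C concrete, the sketch below computes the inner and outer radii of the annular first image for a circular (or inscribed-circle) fourth image of radius R, assuming — as the ratio implies but the disclosure does not state as a formula — that radial distance is proportional to the field angle, and that the three angles taken from the centre outward are the removed upper cap, the selection angle and the removed lower cap.

```python
def annulus_radii(radius: float, inner_angle: float, band_angle: float, outer_angle: float):
    """Radii bounding the annular first image inside a circular fourth image.

    The three angles, taken from the centre outward, partition the radius in proportion
    to themselves (a : b : c = A : B : C); for a full sphere they sum to 180 degrees,
    e.g. 55 + 70 + 55 with the 70-degree selection angle forming the annular band.
    """
    total = inner_angle + band_angle + outer_angle
    r_inner = radius * inner_angle / total
    r_outer = radius * (inner_angle + band_angle) / total
    return r_inner, r_outer

# For a 500-pixel radius and angles 55/70/55, the band spans radii of about 152.8 to 347.2.
print(annulus_radii(500.0, 55.0, 70.0, 55.0))
```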
It can be appreciated that, to reduce imaging cost, the fourth image may be generated with the first imaging sensor. When the first imaging sensor generates the fourth image as an inscribed-circle image, the utilization of the sensor area is lower (the inscribed-circle image is smaller than the full square image the sensor could produce directly), but its degree of distortion is smaller than that of the directly generated square image. Therefore, generating the inscribed-circle image with the first imaging sensor both reduces the imaging cost and reduces the degree of distortion of the image.
According to the embodiment of the disclosure, after the fourth image is acquired, the part with the larger distortion degree of the fourth image is removed, the acquired first image is cut into the plurality of second images, and the plurality of second images are processed respectively, so that the image processing effect is improved.
Fig. 5 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 5, the image processing apparatus 500 includes a first acquisition module 510, a first processing module 520, a second processing module 530, and a second acquisition module 540.
The first acquisition module 510 may be used to acquire a first image. According to an embodiment of the present disclosure, the first obtaining module 510 may perform, for example, the operation S210 described above with reference to fig. 2, which is not described herein again.
The first processing module 520 may be configured to process the first image to obtain a plurality of second images. According to the embodiment of the present disclosure, the first processing module 520 may, for example, perform operation S220 described above with reference to fig. 2, which is not described herein again.
The second processing module 530 may be configured to process at least one of the plurality of second images according to the deformation information of the specific object of each of the plurality of second images, so as to obtain at least one processed second image. According to the embodiment of the present disclosure, the second processing module 530 may, for example, perform operation S230 described above with reference to fig. 2, which is not described herein again.
The second obtaining module 540 may be configured to obtain a third image based on the at least one processed second image and the plurality of second images, where a deformation amount of the specific object of the third image is smaller than a deformation amount of the specific object of the first image. According to the embodiment of the present disclosure, the second obtaining module 540 may, for example, perform the operation S240 described above with reference to fig. 2, which is not described herein again.
According to an embodiment of the present disclosure, processing at least one of the plurality of second images according to the deformation information of the specific object of each of the plurality of second images includes: determining at least one second image needing to be processed according to the deformation information of the specific object of each second image in the plurality of second images, respectively determining a processing mode aiming at each second image in the at least one second image according to the deformation information of the specific object of each second image in the at least one second image, and respectively processing each second image in the at least one second image based on the determined processing mode.
According to an embodiment of the present disclosure, determining at least one second image that needs to be processed according to deformation information of the specific object of each of the plurality of second images includes: and determining the deformation amount of the specific object of the current second image in the plurality of second images, and determining the current second image as at least one second image needing to be processed in response to determining that the deformation amount is larger than a specific threshold value.
According to an embodiment of the present disclosure, acquiring the first image includes: and acquiring a fourth image, and performing selection processing on the fourth image to obtain a first image, wherein the first image comprises an annular partial image of the fourth image.
According to an embodiment of the present disclosure, performing selection processing on the fourth image to obtain the first image includes: and determining a selection angle, and performing selection processing on the fourth image based on the selection angle to obtain a first image.
According to an embodiment of the present disclosure, processing the first image to obtain a plurality of second images includes: processing the first image according to a processing strategy to obtain the plurality of second images, wherein the processing strategy includes at least one of the following: an equal proportion cutting strategy and a priority strategy, wherein the priority strategy is a strategy made according to user information in the first image.
According to an embodiment of the present disclosure, the plurality of second images includes at least one processed second image and at least one unprocessed second image. Obtaining a third image based on the at least one processed second image and the plurality of second images includes: deriving the third image based on the at least one processed second image and the at least one unprocessed second image.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any plurality of the first obtaining module 510, the first processing module 520, the second processing module 530 and the second obtaining module 540 may be combined and implemented in one module, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the first obtaining module 510, the first processing module 520, the second processing module 530, and the second obtaining module 540 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementations of software, hardware, and firmware, or any suitable combination of any of them. Alternatively, at least one of the first acquiring module 510, the first processing module 520, the second processing module 530 and the second acquiring module 540 may be at least partially implemented as a computer program module, which, when executed, may perform a corresponding function.
FIG. 6 schematically shows a block diagram of a computer system for implementing image processing according to an embodiment of the present disclosure. The computer system illustrated in FIG. 6 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 6, a computer system 600 implementing image processing includes a processor 601 and a computer-readable storage medium 602. The system 600 may perform a method according to an embodiment of the present disclosure.
In particular, processor 601 may include, for example, a general purpose microprocessor, an instruction set processor and/or related chip set and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and/or the like. The processor 601 may also include onboard memory for caching purposes. The processor 601 may be a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
Computer-readable storage medium 602 may be, for example, any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the readable storage medium include: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or wired/wireless communication links.
The computer-readable storage medium 602 may comprise a computer program 603, which computer program 603 may comprise code/computer-executable instructions that, when executed by the processor 601, cause the processor 601 to perform a method according to an embodiment of the disclosure or any variant thereof.
The computer program 603 may be configured with computer program code, for example comprising computer program modules. For example, in an example embodiment, the code in the computer program 603 may include one or more program modules, for example module 603A, module 603B, and so on. It should be noted that the division and number of the modules are not fixed; those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation, and when these program modules are executed by the processor 601, the processor 601 may perform the method according to the embodiment of the present disclosure or any variation thereof.
According to an embodiment of the present disclosure, at least one of the first obtaining module 510, the first processing module 520, the second processing module 530, and the second obtaining module 540 may be implemented as a computer program module described with reference to fig. 6, which, when executed by the processor 601, may implement the respective operations described above.
The present disclosure also provides a computer-readable medium, which may be embodied in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The above-mentioned computer-readable medium carries one or more programs which, when executed, implement the above-mentioned image processing method.
According to embodiments of the present disclosure, a computer readable medium may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, optical fiber cable, radio frequency signals, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the present disclosure can be combined and/or integrated in various ways, even if such combinations or integrations are not expressly recited in the present disclosure. In particular, the features recited in the various embodiments and/or claims of the present disclosure may be combined and/or integrated in various ways without departing from the spirit or teaching of the present disclosure. All such combinations and/or integrations fall within the scope of the present disclosure.
While the disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined not only by the appended claims, but also by equivalents thereof.

Claims (10)

1. An image processing method comprising:
Acquiring a first image;
Processing the first image to obtain a plurality of second images;
Processing at least one second image in the plurality of second images according to the deformation information of the specific object of each second image in the plurality of second images to obtain at least one processed second image; and
Obtaining a third image based on the at least one processed second image and the plurality of second images,
Wherein the amount of deformation of the specific object of the third image is smaller than the amount of deformation of the specific object of the first image.
2. The method of claim 1, wherein the processing at least one of the plurality of second images according to deformation information of the specific object of each of the plurality of second images comprises:
Determining the at least one second image needing to be processed according to the deformation information of the specific object of each second image in the plurality of second images;
Respectively determining a processing mode aiming at each second image in the at least one second image according to the deformation information of the specific object of each second image in the at least one second image; and
Processing each second image in the at least one second image respectively based on the determined processing mode.
3. The method of claim 2, wherein the determining the at least one second image that needs to be processed according to the deformation information of the specific object of each of the plurality of second images comprises:
Determining an amount of deformation of a particular object of a current second image of the plurality of second images; and
In response to determining that the amount of deformation is greater than a particular threshold, determining the current second image to be the at least one second image that needs to be processed.
4. The method of claim 1, wherein the acquiring a first image comprises:
Acquiring a fourth image; and
Performing selection processing on the fourth image to obtain the first image, wherein the first image comprises an annular partial image of the fourth image.
5. The method of claim 4, wherein the selecting the fourth image to obtain the first image comprises:
Determining a selection angle; and
Carrying out selection processing on the fourth image based on the selection angle to obtain the first image.
6. The method of claim 1, wherein the processing the first image to obtain a plurality of second images comprises:
Processing the first image to obtain a plurality of second images according to a processing strategy;
Wherein the processing policy comprises at least one of:
An equal proportion cutting strategy; and
A priority policy, wherein the priority policy is a policy formulated according to user information in the first image.
7. The method of claim 1, wherein the plurality of second images includes the at least one processed second image and at least one unprocessed second image; obtaining a third image based on the at least one processed second image and the plurality of second images comprises:
Obtaining the third image based on the at least one processed second image and the at least one unprocessed second image.
8. An image processing apparatus comprising:
The first acquisition module acquires a first image;
The first processing module is used for processing the first image to obtain a plurality of second images;
The second processing module is used for processing at least one second image in the plurality of second images according to the deformation information of the specific object of each second image in the plurality of second images to obtain at least one processed second image; and
A second obtaining module to obtain a third image based on the at least one processed second image and the plurality of second images,
Wherein the amount of deformation of the specific object of the third image is smaller than the amount of deformation of the specific object of the first image.
9. A computing device, comprising:
One or more processors; and
A memory for storing one or more programs,
Wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
10. A computer-readable storage medium storing computer-executable instructions for implementing the method of any one of claims 1 to 7 when executed.
CN201910939346.0A 2019-09-30 2019-09-30 Image processing method, apparatus, computing device, and medium Pending CN110572578A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910939346.0A CN110572578A (en) 2019-09-30 2019-09-30 Image processing method, apparatus, computing device, and medium

Publications (1)

Publication Number Publication Date
CN110572578A (zh) 2019-12-13

Family

ID=68783419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910939346.0A Pending CN110572578A (en) 2019-09-30 2019-09-30 Image processing method, apparatus, computing device, and medium

Country Status (1)

Country Link
CN (1) CN110572578A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104427252A (en) * 2013-09-03 2015-03-18 三星电子株式会社 Method for synthesizing images and electronic device thereof
US20170330311A1 (en) * 2014-12-04 2017-11-16 Mitsubishi Electric Corporation Image processing device and method, image capturing device, program, and record medium
CN105007410A (en) * 2015-06-30 2015-10-28 广东欧珀移动通信有限公司 Large viewing angle camera control method and user terminal
CN105141827A (en) * 2015-06-30 2015-12-09 广东欧珀移动通信有限公司 Distortion correction method and terminal
CN108389159A (en) * 2015-06-30 2018-08-10 广东欧珀移动通信有限公司 A kind of distortion correction method and terminal
CN107018316A (en) * 2015-12-22 2017-08-04 卡西欧计算机株式会社 Image processing apparatus, image processing method and program

Similar Documents

Publication Publication Date Title
US9866752B2 (en) Systems and methods for producing a combined view from fisheye cameras
US20190311459A1 (en) Method and device for performing mapping on spherical panoramic image
CN107945112B (en) Panoramic image splicing method and device
US10957093B2 (en) Scene-based foveated rendering of graphics content
JP7316387B2 (en) Facial image processing method, device, readable medium and electronic apparatus
CN111405173B (en) Image acquisition method and device, point reading equipment, electronic equipment and storage medium
US20220092803A1 (en) Picture rendering method and apparatus, terminal and corresponding storage medium
US11683583B2 (en) Picture focusing method, apparatus, terminal, and corresponding storage medium
CN113255619B (en) Lane line recognition and positioning method, electronic device, and computer-readable medium
US11562465B2 (en) Panoramic image stitching method and apparatus, terminal and corresponding storage medium
CN111766951A (en) Image display method and apparatus, computer system, and computer-readable storage medium
US20150316646A1 (en) Synthetic aperture radar target modeling
US20220198768A1 (en) Methods and apparatus to control appearance of views in free viewpoint media
WO2017023620A1 (en) Method and system to assist a user to capture an image or video
US10573277B2 (en) Display device, display system, and non-transitory recording medium, to adjust position of second image in accordance with adjusted zoom ratio of first image
US9613288B2 (en) Automatically identifying and healing spots in images
CN110572578A (en) Image processing method, apparatus, computing device, and medium
US20220086350A1 (en) Image Generation Method and Apparatus, Terminal and Corresponding Storage Medium
US20140267730A1 (en) Automotive camera vehicle integration
US10701286B2 (en) Image processing device, image processing system, and non-transitory storage medium
CN110140148B (en) Method and apparatus for multi-band blending of seams in images from multiple cameras
US9723216B2 (en) Method and system for generating an image including optically zoomed and digitally zoomed regions
US10620309B2 (en) Synthetic aperture radar target modeling
CN107665481B (en) Image processing method, system, processing equipment and electronic equipment
US10204397B2 (en) Bowtie view representing a 360-degree image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191213