CN111292245A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN111292245A
CN111292245A
Authority
CN
China
Prior art keywords
image
coordinate
pixel point
mapping
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811496667.XA
Other languages
Chinese (zh)
Inventor
李耔余
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201811496667.XA priority Critical patent/CN111292245A/en
Publication of CN111292245A publication Critical patent/CN111292245A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/60Rotation of whole images or parts thereof
    • G06T3/604Rotation of whole images or parts thereof using coordinate rotation digital computer [CORDIC] devices

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The image processing method comprises the following steps: acquiring an image to be processed; acquiring a first polar coordinate of a pixel point of the image to be processed; acquiring a mapping parameter; calculating a second polar coordinate of the pixel point according to the first polar coordinate and the mapping parameter; and generating a processed image according to the second polar coordinate. By calculating the deformed position of each pixel point from the polar coordinate of the original image and the mapping parameter, the method and the apparatus solve the technical problem that nonlinear deformation is complex to realize in the prior art.

Description

Image processing method and device
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of computer technology, intelligent terminals have found an increasingly wide range of applications; for example, they can be used to listen to music, play games, chat online, take photographs, and so on. As for photographing, the cameras of intelligent terminals have reached more than ten million pixels, offering high definition and a photographing effect comparable to that of a professional camera.
At present, when an intelligent terminal is used for photographing, not only can traditional photographing effects be achieved with the camera software built into the terminal at the factory, but photographing effects with additional functions can also be achieved by downloading an application program (APP for short) from the network, for example APPs providing dark-light detection, a beauty camera, super-pixel functions, and the like. Various special effects such as beautification, filters, big eyes, and face slimming can be formed by combining various basic image processing operations.
Existing image special effects are generally implemented with special-effect resources, such as a filter effect or a skin-smoothing effect. Deformation effects are generally realized by rotating and translating the image; however, some deformations are nonlinear and difficult to achieve through rotation and translation alone, which is a problem to be solved urgently.
Disclosure of Invention
In a first aspect, an embodiment of the present disclosure provides an image processing method, including: acquiring an image to be processed; acquiring a first polar coordinate of a pixel point of the image to be processed; acquiring a mapping parameter; calculating a second polar coordinate of the pixel point according to the first polar coordinate and the mapping parameter; and generating a processed image according to the second polar coordinate.
Further, the acquiring the image to be processed includes: and acquiring a current image frame of the video as an image to be processed.
Further, the obtaining the first polar coordinate of the pixel point of the image to be processed includes: acquiring a first UV coordinate of the pixel point; converting the first UV coordinate to a first Cartesian coordinate; converting the first Cartesian coordinates to first polar coordinates.
Further, the obtaining of the mapping parameter includes: acquiring the type of mapping; and acquiring mapping parameters corresponding to the mapping types.
Further, the mapping type is equidistant mapping, and the mapping parameters are focal length and incidence angle.
Further, the calculating a second polar coordinate of the pixel point according to the first polar coordinate and the mapping parameter includes: calculating the offset of the pixel point according to the mapping parameter; and calculating a second polar coordinate of the pixel point according to the offset and the first polar coordinate.
Further, the calculating the offset of the pixel point according to the mapping parameter includes: acquiring a mapping function; and substituting the mapping parameters into a mapping function to calculate the offset of the pixel point.
Further, the generating a processed image according to the second polar coordinate includes: and assigning the first attribute value of the pixel point to the pixel point at the position of the second polar coordinate to generate a processed image.
Further, assigning the first attribute value of the pixel point to the pixel point at the position of the second polar coordinate to generate a processed image, including: converting the second polar coordinate to a second UV coordinate; and assigning the first attribute value of the pixel point to the pixel point at the position of the second UV coordinate to generate a processed image.
Further, the converting the second polar coordinate to a second UV coordinate includes: converting the second polar coordinate to a second Cartesian coordinate; and normalizing the second Cartesian coordinate into a second UV coordinate.
Further, the obtaining the first polar coordinate of the pixel point of the image to be processed includes: and acquiring a first polar coordinate of any pixel point of the image to be processed.
In a second aspect, an embodiment of the present disclosure provides an image processing apparatus, including:
the image acquisition module is used for acquiring an image to be processed;
the first coordinate acquisition module is used for acquiring a first polar coordinate of a pixel point of the image to be processed;
the mapping parameter acquisition module is used for acquiring mapping parameters;
the second coordinate calculation module is used for calculating a second polar coordinate of the pixel point according to the first polar coordinate and the mapping parameter;
and the image processing module is used for generating a processed image according to the second polar coordinate.
Further, the image obtaining module is further configured to: and acquiring a current image frame of the video as an image to be processed.
Further, the first coordinate obtaining module further includes:
the first UV coordinate acquisition module is used for acquiring a first UV coordinate of the pixel point;
the first conversion module is used for converting the first UV coordinate into a first Cartesian coordinate;
and the second conversion module is used for converting the first Cartesian coordinate into a first polar coordinate.
Further, the mapping parameter obtaining module further includes:
the type acquisition module is used for acquiring the mapping type;
and the mapping parameter obtaining submodule is used for obtaining the mapping parameters corresponding to the mapping types.
Further, the mapping type is equidistant mapping, and the mapping parameters are focal length and incidence angle.
Further, the second coordinate calculation module further includes:
the offset calculation module is used for calculating the offset of the pixel point according to the mapping parameter;
and the second coordinate calculation submodule is used for calculating the second polar coordinate of the pixel point according to the offset and the first polar coordinate.
Further, the offset calculation module includes:
the mapping function acquisition module is used for acquiring a mapping function;
and the offset calculation submodule is used for substituting the mapping parameter into a mapping function to calculate the offset of the pixel point.
Further, the image processing module further includes:
and the assignment module is used for assigning the first attribute value of the pixel point to the pixel point at the position of the second polar coordinate to generate a processed image.
Further, the assignment module further includes:
the third coordinate conversion module is used for converting the second polar coordinate into a second UV coordinate;
and the assignment submodule is used for assigning the first attribute value of the pixel point to the pixel point at the position of the second UV coordinate to generate a processed image.
Further, the third coordinate conversion module further includes:
the fourth coordinate conversion module is used for converting the second polar coordinate into a second Cartesian coordinate;
and the fifth coordinate conversion module is used for normalizing the second Cartesian coordinate into a second UV coordinate.
Further, the first coordinate obtaining module is further configured to:
and acquiring a first polar coordinate of any pixel point of the image to be processed.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image processing method of any one of the foregoing first aspects.
In a fourth aspect, the present disclosure provides a non-transitory computer-readable storage medium, which stores computer instructions for causing a computer to execute the image processing method according to any one of the foregoing first aspects.
The foregoing is a summary of the present disclosure, intended to promote a clear understanding of its technical means; the present disclosure may be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present disclosure, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an embodiment of an image processing method provided in an embodiment of the present disclosure;
fig. 2 is a schematic diagram of isometric mapping in an image processing method provided by an embodiment of the present disclosure;
fig. 3 is a schematic diagram of radial offset in an image processing method provided by an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an embodiment of an image processing apparatus provided in an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
Fig. 1 is a flowchart of a first embodiment of an image processing method provided in this embodiment of the present disclosure, where the image processing method provided in this embodiment may be executed by an image processing apparatus, the image processing apparatus may be implemented as software, or implemented as a combination of software and hardware, and the image processing apparatus may be integrated in a certain device in an image processing system, such as an image processing server or an image processing terminal device. As shown in fig. 1, the method comprises the steps of:
step S101, acquiring an image to be processed;
in one embodiment, the image to be processed may be acquired by an image sensor, which refers to any device that can capture images; typical image sensors are video cameras, still cameras, and the like. In this embodiment, the image sensor may be a camera on the terminal device, such as the front-facing or rear-facing camera of a smart phone, and the image acquired by the camera may be displayed directly on the display screen of the smart phone.
In this embodiment, the acquiring of the to-be-processed image may be acquiring a current image frame of a video currently captured by the terminal device, and since the video is composed of a plurality of image frames, the processing of the image in this embodiment may be processing the image frame of the video.
Step S102: acquiring a first polar coordinate of a pixel point of the image to be processed;
in one embodiment, obtaining the first polar coordinate of a pixel point of the image to be processed includes: obtaining a first UV coordinate of the pixel point; converting the first UV coordinate into a first Cartesian coordinate; and converting the first Cartesian coordinate into a first polar coordinate. Here, the Cartesian coordinate is the coordinate used by the texture image. Let the UV coordinate be (u, v) and the texture map be a p × q map, where p is the number of pixels of the texture map in the horizontal direction and q is the number of pixels in the vertical direction. With the origin of the Cartesian coordinate system of the texture map coinciding with the origin of the UV coordinate system of the UV image, the Cartesian coordinate (x, y) of any point on the texture map is (u·p, v·q), and its polar coordinate (ρ, α) is given by

ρ = √(x² + y²), α = arctan(y/x)
The following specific example illustrates the coordinate transformation. The value range of a UV coordinate is [0, 1]. Let the texture map be a 1280 × 720 image, and let the pixel point of the image to be processed have UV coordinates (0.5, 0.5). Its Cartesian coordinates on the texture map are then (0.5 × 1280, 0.5 × 720) = (640, 360), and the polar coordinates of the point are

ρ = √(640² + 360²) ≈ 734.30, α = arctan(360/640) ≈ 29.36°
The coordinate transformation in the above example assumes that the origins of the three coordinate systems coincide. If the origins of the UV coordinate system and the Cartesian coordinate system do not coincide, the angle α requires further processing: when x < 0,

α = arctan(y/x) + π

and when x > 0,

α = arctan(y/x)
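The UV-to-polar conversion described above can be sketched as follows; this is a minimal illustration assuming coinciding origins, where the helper name `uv_to_polar` is hypothetical and `atan2` stands in for the piecewise quadrant correction of α:

```python
import math

def uv_to_polar(u, v, width, height):
    """Convert a UV coordinate in [0, 1] to polar coordinates (rho, alpha)
    on a width x height texture whose Cartesian origin coincides with the
    UV origin."""
    x = u * width            # first Cartesian coordinate: (u*p, v*q)
    y = v * height
    rho = math.hypot(x, y)   # rho = sqrt(x^2 + y^2)
    alpha = math.atan2(y, x) # atan2 handles the x < 0 correction automatically
    return rho, alpha
```

Running it on the worked example (UV (0.5, 0.5) on a 1280 × 720 texture) reproduces the values given in the text.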
in one embodiment, obtaining the first polar coordinate of a pixel point of the image to be processed includes: obtaining the first polar coordinate of any pixel point of the image to be processed. That is, for any given pixel point in the image, the image processing method of the present disclosure may choose to process the entire image or only a part of it.
Step S103: acquiring a mapping parameter;
in one embodiment, a parameter required for pixel mapping is obtained, and the parameter may be a parameter defining a degree of the pixel mapping, such as a degree of offset.
In one embodiment, the obtaining of the mapping parameter includes: acquiring the type of mapping; and acquiring mapping parameters corresponding to the mapping types. In this embodiment, a type of the mapping to be acquired is first selected, the type of the mapping may be a type defined by a user, each type of the mapping corresponds to a different mapping function, the mapping function is used to calculate a position of a pixel after deformation, and at this time, the mapping parameter is a parameter of the mapping function and is used to define a form of the function itself. Optionally, the mapping type is an equidistant mapping, the mapping parameters are focal lengths and incident angles of the equidistant mapping, the meaning of the equidistant mapping is shown in fig. 2, where 201 is a mapping model, in this embodiment, the mapping model is a spherical lens, where 202 is an original image, 203 is a mapped image, a pixel point a of the original image has a mapping point B on the mapping model, BO is an incident ray of a point B, θ is an incident angle, a point G is a pixel point after the point a is shifted by the mapping model, C is an origin of the original image, E is an origin of the mapped image, GE is d in length, a focal length of the mapping model is OE ═ f, and d ═ f ═ θ, that is, a distance from the mapped pixel to the origin is in a positive proportional relationship with the incident angle. It is to be understood that the above mapping manners are only examples, and practically any mapping manner may be applied to the present disclosure, for example, the function of the orthogonal mapping model is d ═ f × s n (θ), and the mapping parameters required by different mapping manners may be different, and are not described herein again. Step S104: calculating a second polar coordinate of the pixel point according to the first polar coordinate and the mapping parameter;
in one embodiment, the second polar coordinate of the pixel point is calculated according to the angle in the first polar coordinate and the mapping parameter.
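As a sketch, the two mapping functions mentioned above (equidistant, d = f · θ, and the orthographic alternative, d = f · sin θ) can be written directly; the function names are illustrative, not taken from the disclosure:

```python
import math

def equidistant_offset(theta, f):
    """Equidistant mapping: the distance d of the mapped pixel from the
    image origin is proportional to the incidence angle theta, d = f * theta,
    where f is the focal length of the mapping model."""
    return f * theta

def orthographic_offset(theta, f):
    """Orthographic mapping model, d = f * sin(theta), mentioned in the text
    as an alternative mapping type."""
    return f * math.sin(theta)
```

A different mapping type would simply substitute a different function of θ here, with whatever parameters that function requires.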
In one embodiment, the offset of the pixel point is calculated according to the mapping parameter, and the second polar coordinate of the pixel point is calculated according to the offset and the first polar coordinate. Calculating the offset of the pixel point according to the mapping parameter includes: obtaining an offset function; and substituting the mapping parameter into the offset function to calculate the offset of the pixel point. In this embodiment, the offset may be the offset of a pixel in the radial direction of the lens in step S103. Fig. 3 shows the bottom-surface map of the lens in fig. 2, where D is the projection of point B in fig. 2 onto the bottom surface and G' is the projection of point G, so that OG' is the offset of the pixel with respect to the coordinate origin. Following the example in step S103, OG' = d = f · θ; when f and θ are both constants, d can be calculated directly. In this embodiment the pixel point undergoes only a radial offset, so the angle α in the first polar coordinate does not change, and the second polar coordinate is (f · θ, α). Alternatively, θ is not constant but the radius R of the spherical surface of the lens serving as the mapping model is constant; in this case θ can be calculated from R as follows. Referring to fig. 2, since point B is the projection of point A onto the spherical surface, the XY coordinates of point B are the same as those of point A, namely (x, y). Let the Z-axis coordinate of point B be z. Since B lies on the spherical surface, R² = x² + y² + z², from which

z = √(R² − x² − y²)

With BD = z and OD = √(x² + y²),

tan θ = OD / BD = √(x² + y²) / √(R² − x² − y²)

then

θ = arctan(√(x² + y²) / √(R² − x² − y²))

and the second polar coordinate is

(f · arctan(√(x² + y²) / √(R² − x² − y²)), α)
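A minimal sketch of this derivation, assuming the sphere radius R and focal length f are known; the function name is illustrative:

```python
import math

def second_polar(x, y, alpha, R, f):
    """Compute the second polar coordinate (rho2, alpha) of a pixel under the
    equidistant mapping, with the incidence angle derived from the sphere
    radius R: z = sqrt(R^2 - x^2 - y^2), tan(theta) = sqrt(x^2 + y^2) / z."""
    rho = math.hypot(x, y)                 # OD = sqrt(x^2 + y^2)
    z = math.sqrt(R * R - x * x - y * y)   # point B lies on the sphere of radius R
    theta = math.atan2(rho, z)             # incidence angle theta
    return f * theta, alpha                # purely radial offset: alpha unchanged
```

For a point with (x, y) = (3, 4) on a sphere of radius 13, z = 12 and θ = arctan(5/12), matching the right-triangle relation in the derivation.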
It should be understood that the above calculation of the second polar coordinate is only an example and does not limit the present disclosure. Other offsets may also be applied to the pixel point; for example, in addition to the radial offset, a tangential (circumferential) offset may be added. In that case only an offset angle, or a function for calculating the offset angle, needs to be defined, and the second polar coordinate is obtained by adding the offset angle to α. In actual use, the offset in any direction and its calculation function may be defined as needed to form various processing effects, which are not described again here.
Step S105: and generating a processed image according to the second polar coordinate.
In one embodiment, generating a processed image according to the second polar coordinate includes: assigning the first attribute value of the pixel point to the pixel point at the position of the second polar coordinate to generate the processed image. In this embodiment, the first attribute value of a pixel point of the image to be processed is obtained directly; the first attribute value may typically be the values of the three color channels in an RGB color space, and this color value is assigned to the pixel point at the position of the second polar coordinate to generate the processed image. Note that the position of the second polar coordinate may cover no pixel or several pixels: for example, a 1280 × 720 image has 1280 × 720 pixel points and just as many pixel coordinates, but the calculated coordinate does not necessarily fall exactly on a pixel position. In that case the pixel closest to the second polar coordinate may be determined, and its position used as the assignment target of the color value. It is to be understood that the first attribute value may also be another attribute of the pixel point, optionally brightness, saturation, hue, and the like, which are not described again here.
In an embodiment, assigning the first attribute value of the pixel point to the pixel point at the position of the second polar coordinate to generate a processed image includes: converting the second polar coordinate into a second UV coordinate, and assigning the first attribute value of the pixel point to the pixel point at the position of the second UV coordinate to generate the processed image. In this embodiment, converting the second polar coordinate into the second UV coordinate may include: converting the second polar coordinate into a second Cartesian coordinate; and normalizing the second Cartesian coordinate into a second UV coordinate. Generally, a texture map needs to be rendered through a UV map, so after the second polar coordinate is obtained it can be converted into a UV coordinate; the conversion is the reverse of converting a UV coordinate into a polar coordinate: the Cartesian coordinate is calculated from the polar coordinate, and the Cartesian coordinate is then converted into the UV coordinate. In this way the processed UV image of the original UV image is obtained, and the processed UV image can be used directly to render the processed texture image, yielding the processed image.
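Putting the steps together, a forward-mapping sketch under the equidistant model might look like the following; it assumes a plain 2-D list of RGB tuples as the image representation (an illustrative choice, not from the disclosure) and rounds to the nearest destination pixel as described above:

```python
import math

def remap_image(src, width, height, f, R):
    """Forward-map every pixel of `src` (a 2-D list of RGB tuples, indexed
    src[row][col]) through the equidistant mapping and write its color to
    the nearest pixel of the output image."""
    dst = [[None] * width for _ in range(height)]
    for j in range(height):
        for i in range(width):
            # first polar coordinate of the pixel (Cartesian origin = UV origin)
            x, y = float(i), float(j)
            rho = math.hypot(x, y)
            alpha = math.atan2(y, x)
            z2 = R * R - x * x - y * y
            if z2 <= 0:            # outside the mapping sphere: leave unmapped
                continue
            theta = math.atan2(rho, math.sqrt(z2))
            rho2 = f * theta       # second polar coordinate (rho2, alpha)
            # back to Cartesian, then round to the nearest covered pixel
            x2 = int(round(rho2 * math.cos(alpha)))
            y2 = int(round(rho2 * math.sin(alpha)))
            if 0 <= x2 < width and 0 <= y2 < height:
                dst[y2][x2] = src[j][i]
    return dst
```

A production version would instead iterate over destination pixels and sample the source (inverse mapping) to avoid holes, but the forward form mirrors the order of steps S101 to S105 in the text.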
In an embodiment, the assignment process may further apply other transformations to form more effects. Optionally, the color value of the pixel point at the first polar coordinate is mixed with the color value of a template map in a certain proportion, yielding a blended deformation effect. Typically, the color of the pixel point at the first polar coordinate and the color of the template map are mixed in a 1:1 ratio, and the mixed color is assigned to the pixel point at the position of the second polar coordinate; this adds a layer of filter to the deformed image and can produce a better image processing effect.
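The 1:1 color mixing described here can be sketched as follows; integer averaging is an assumption, since the disclosure does not specify the rounding behavior:

```python
def blend_colors(color_a, color_b):
    """Mix two RGB colors in the 1:1 ratio described above; `color_b` plays
    the role of the template (filter) map color."""
    return tuple((a + b) // 2 for a, b in zip(color_a, color_b))
```

Other mixing proportions would simply weight the two terms differently before assigning the result to the pixel at the second polar coordinate.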
In the above, although the steps in the above method embodiments are described in the above sequence, it should be clear to those skilled in the art that the steps in the embodiments of the present disclosure are not necessarily performed in the above sequence, and may also be performed in other sequences such as reverse, parallel, and cross, and further, on the basis of the above steps, other steps may also be added by those skilled in the art, and these obvious modifications or equivalents should also be included in the protection scope of the present disclosure, and are not described herein again.
Fig. 4 is a schematic structural diagram of an embodiment of an image processing apparatus according to an embodiment of the present disclosure, and as shown in fig. 4, the apparatus 400 includes: an image acquisition module 401, a first coordinate acquisition module 402, a mapping parameter acquisition module 403, a second coordinate calculation module 404, and an image processing module 405. Wherein the content of the first and second substances,
an image obtaining module 401, configured to obtain an image to be processed;
a first coordinate obtaining module 402, configured to obtain a first polar coordinate of a pixel point of the image to be processed;
a mapping parameter obtaining module 403, configured to obtain a mapping parameter;
a second coordinate calculation module 404, configured to calculate a second polar coordinate of the pixel point according to the first polar coordinate and the mapping parameter;
and an image processing module 405, configured to generate a processed image according to the second polar coordinate.
Further, the image obtaining module 401 is further configured to: and acquiring a current image frame of the video as an image to be processed.
Further, the first coordinate obtaining module 402 further includes:
the first UV coordinate acquisition module is used for acquiring a first UV coordinate of the pixel point;
the first conversion module is used for converting the first UV coordinate into a first Cartesian coordinate;
and the second conversion module is used for converting the first Cartesian coordinate into a first polar coordinate.
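As an illustrative sketch (not part of the claimed method), the two conversions performed by the first and second conversion modules can be written as follows, assuming UV coordinates normalized to [0, 1] with the Cartesian origin at the image center — both conventions are assumptions, since the disclosure does not fix them:

```python
import math

def uv_to_polar(u, v):
    # First UV coordinate -> first Cartesian coordinate: shift the
    # origin to the image center (assumed UV range [0, 1]).
    x, y = u - 0.5, v - 0.5
    # First Cartesian coordinate -> first polar coordinate:
    # radius and angle of the pixel point.
    return math.hypot(x, y), math.atan2(y, x)
```

For example, the pixel at UV (1.0, 0.5) — the middle of the right edge — maps to radius 0.5 at angle 0 under this convention.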
Further, the mapping parameter obtaining module 403 further includes:
the type acquisition module is used for acquiring the mapping type;
and the mapping parameter obtaining submodule is used for obtaining the mapping parameters corresponding to the mapping types.
Further, the mapping type is equidistant mapping, and the mapping parameters are focal length and incidence angle.
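For the equidistant mapping named above, the standard relation between the two mapping parameters is r = f·θ (the image radius is proportional to the angle of incidence). The formula is the conventional equidistant (f-theta) fisheye model and is given here as an illustrative assumption, since the disclosure does not write it out:

```python
def equidistant_radius(focal_length, incidence_angle):
    # Equidistant (f-theta) model: image radius grows linearly
    # with the incidence angle (in radians), r = f * theta.
    return focal_length * incidence_angle
```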
Further, the second coordinate calculation module 404 further includes:
the offset calculation module is used for calculating the offset of the pixel point according to the mapping parameter;
and the second coordinate calculation submodule is used for calculating the second polar coordinate of the pixel point according to the offset and the first polar coordinate.
Further, the offset calculation module includes:
the mapping function acquisition module is used for acquiring a mapping function;
and the offset calculation submodule is used for substituting the mapping parameter into a mapping function to calculate the offset of the pixel point.
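A minimal sketch of the offset calculation module and the second coordinate calculation submodule, under assumed interfaces (the disclosure does not specify the mapping function's signature, and the choice of the equidistant model r = f·θ as the example mapping function is an assumption):

```python
def pixel_offset(mapping_function, mapping_params):
    # Substitute the mapping parameters into the mapping function
    # to obtain the offset of the pixel point.
    return mapping_function(*mapping_params)

def equidistant_mapping(focal_length, incidence_angle):
    # Assumed example mapping function: equidistant model, f * theta.
    return focal_length * incidence_angle

def second_polar(first_polar, offset):
    # Apply the offset to the radial component; the angle is unchanged
    # (an assumption -- the text only says the second polar coordinate
    # is computed from the offset and the first polar coordinate).
    r, theta = first_polar
    return r + offset, theta
```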
Further, the image processing module 405 further includes:
the assignment module is used for assigning the first attribute value of the pixel point to the pixel point at the position of the second polar coordinate to generate a processed image.
Further, the assignment module further includes:
the third coordinate conversion module is used for converting the second polar coordinate into a second UV coordinate;
and the assignment submodule is used for assigning the first attribute value of the pixel point to the pixel point at the position of the second UV coordinate to generate a processed image.
Further, the third coordinate conversion module further includes:
the fourth coordinate conversion module is used for converting the second polar coordinate into a second Cartesian coordinate;
and the fifth coordinate conversion module is used for normalizing the second Cartesian coordinate into a second UV coordinate.
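The inverse conversion chain (second polar coordinate → second Cartesian coordinate → normalized second UV coordinate) can be sketched as follows, under the same assumed center-origin, unit-square convention as before:

```python
import math

def polar_to_uv(r, theta):
    # Second polar coordinate -> second Cartesian coordinate.
    x = r * math.cos(theta)
    y = r * math.sin(theta)
    # Second Cartesian coordinate -> normalized second UV coordinate,
    # shifting the center-origin frame back to [0, 1] (assumed convention).
    return x + 0.5, y + 0.5
```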
Further, the first coordinate obtaining module 402 is further configured to:
acquiring a first polar coordinate of any pixel point of the image to be processed.
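The five modules above can be sketched end-to-end as follows. This is an illustrative assumption, not the claimed implementation: `pixels` is a hypothetical dict mapping UV tuples to attribute values, and `radial_map` stands in for the mapping function; the specific `fisheye` radial map (incidence angle taken as atan(r/f), mapped radius f times that angle) is one plausible reading of the equidistant model, not stated in the text:

```python
import math

def process_image(pixels, radial_map, focal_length):
    out = {}
    for (u, v), value in pixels.items():
        # First polar coordinate (UV -> Cartesian -> polar).
        x, y = u - 0.5, v - 0.5
        r, theta = math.hypot(x, y), math.atan2(y, x)
        # Second polar coordinate from the mapping parameters.
        r2 = radial_map(r, focal_length)
        # Second polar -> Cartesian -> UV; assign the pixel's
        # first attribute value to the new position.
        out[(r2 * math.cos(theta) + 0.5, r2 * math.sin(theta) + 0.5)] = value
    return out

def fisheye(r, f):
    # Assumed equidistant-style radial map: r' = f * atan(r / f).
    return f * math.atan(r / f)
```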
The apparatus shown in fig. 4 can perform the method of the embodiment shown in fig. 1, and reference may be made to the related description of the embodiment shown in fig. 1 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 1, and are not described herein again.
Referring now to FIG. 5, a block diagram of an electronic device 500 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising the at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The foregoing description is only a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure — for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.

Claims (14)

1. An image processing method, comprising:
acquiring an image to be processed;
acquiring a first polar coordinate of a pixel point of the image to be processed;
acquiring a mapping parameter;
calculating a second polar coordinate of the pixel point according to the first polar coordinate and the mapping parameter;
and generating a processed image according to the second polar coordinate.
2. The image processing method of claim 1, wherein the acquiring the image to be processed comprises:
acquiring a current image frame of a video as the image to be processed.
3. The image processing method according to claim 1, wherein the obtaining of the first polar coordinates of the pixel points of the image to be processed comprises:
acquiring a first UV coordinate of the pixel point;
converting the first UV coordinate to a first Cartesian coordinate;
converting the first Cartesian coordinates to first polar coordinates.
4. The image processing method of claim 1, wherein the obtaining the mapping parameters comprises:
acquiring the type of mapping;
and acquiring mapping parameters corresponding to the mapping types.
5. The image processing method of claim 1, wherein the mapping type is an equidistant mapping and the mapping parameters are a focal length and an incidence angle.
6. The image processing method of claim 1, wherein said calculating a second polar coordinate of the pixel point according to the first polar coordinate and a mapping parameter comprises:
calculating the offset of the pixel point according to the mapping parameter;
and calculating a second polar coordinate of the pixel point according to the offset and the first polar coordinate.
7. The image processing method of claim 6, wherein said calculating an offset of said pixel point according to said mapping parameter comprises:
acquiring a mapping function;
and substituting the mapping parameters into a mapping function to calculate the offset of the pixel point.
8. The image processing method of claim 1, wherein said generating a processed image from said second polar coordinates comprises:
assigning the first attribute value of the pixel point to the pixel point at the position of the second polar coordinate to generate a processed image.
9. The image processing method according to claim 8, wherein the assigning the first attribute value of the pixel point to the pixel point at the position of the second polar coordinate to generate the processed image comprises:
converting the second polar coordinate to a second UV coordinate;
and assigning the first attribute value of the pixel point to the pixel point at the position of the second UV coordinate to generate a processed image.
10. The image processing method of claim 9, wherein said converting the second polar coordinate to a second UV coordinate comprises:
converting the second polar coordinate into a second Cartesian coordinate;
and normalizing the second Cartesian coordinate into a second UV coordinate.
11. The image processing method according to claim 1, wherein the obtaining of the first polar coordinates of the pixel points of the image to be processed comprises:
acquiring a first polar coordinate of any pixel point of the image to be processed.
12. An image processing apparatus characterized by comprising:
the image acquisition module is used for acquiring an image to be processed;
the first coordinate acquisition module is used for acquiring a first polar coordinate of a pixel point of the image to be processed;
the mapping parameter acquisition module is used for acquiring mapping parameters;
the second coordinate calculation module is used for calculating a second polar coordinate of the pixel point according to the first polar coordinate and the mapping parameter;
and the image processing module is used for generating a processed image according to the second polar coordinate.
13. An electronic device, comprising:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer readable instructions such that the processor when executing performs the image processing method according to any of claims 1-11.
14. A computer-readable storage medium storing non-transitory computer-readable instructions that, when executed by a computer, cause the computer to perform the image processing method of any one of claims 1-11.
CN201811496667.XA 2018-12-07 2018-12-07 Image processing method and device Pending CN111292245A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811496667.XA CN111292245A (en) 2018-12-07 2018-12-07 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811496667.XA CN111292245A (en) 2018-12-07 2018-12-07 Image processing method and device

Publications (1)

Publication Number Publication Date
CN111292245A true CN111292245A (en) 2020-06-16

Family

ID=71017143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811496667.XA Pending CN111292245A (en) 2018-12-07 2018-12-07 Image processing method and device

Country Status (1)

Country Link
CN (1) CN111292245A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063299A (en) * 2022-08-19 2022-09-16 北京睿芯高通量科技有限公司 Image preprocessing method and device, electronic equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04211584A (en) * 1990-01-11 1992-08-03 Grass Valley Group Inc:The Mapping function circuit
JP2006138735A (en) * 2004-11-12 2006-06-01 Central Res Inst Of Electric Power Ind Method, device and program of detecting camera angle change, and method of image processing, facility monitoring, surveying and setting stereoscopic camera using same
CN102216941A (en) * 2008-08-19 2011-10-12 数字标记公司 Methods and systems for content processing
CN103035015A (en) * 2012-11-28 2013-04-10 无锡羿飞科技有限公司 Processing method of projector spherical display output image
CN105894549A (en) * 2015-10-21 2016-08-24 乐卡汽车智能科技(北京)有限公司 Panorama assisted parking system and device and panorama image display method
CN106165387A (en) * 2013-11-22 2016-11-23 维迪诺蒂有限公司 Light field processing method
CN106558017A (en) * 2015-09-25 2017-04-05 无锡羿飞科技有限公司 Spherical display image processing method and system
CN107169926A (en) * 2017-04-26 2017-09-15 腾讯科技(深圳)有限公司 Image processing method and device
CN108600576A (en) * 2013-08-28 2018-09-28 株式会社理光 Image processing apparatus, method and system and computer readable recording medium storing program for performing
CN108830787A (en) * 2018-06-20 2018-11-16 北京微播视界科技有限公司 The method, apparatus and electronic equipment of anamorphose

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04211584A (en) * 1990-01-11 1992-08-03 Grass Valley Group Inc:The Mapping function circuit
JP2006138735A (en) * 2004-11-12 2006-06-01 Central Res Inst Of Electric Power Ind Method, device and program of detecting camera angle change, and method of image processing, facility monitoring, surveying and setting stereoscopic camera using same
CN102216941A (en) * 2008-08-19 2011-10-12 数字标记公司 Methods and systems for content processing
CN103035015A (en) * 2012-11-28 2013-04-10 无锡羿飞科技有限公司 Processing method of projector spherical display output image
CN108600576A (en) * 2013-08-28 2018-09-28 株式会社理光 Image processing apparatus, method and system and computer readable recording medium storing program for performing
CN106165387A (en) * 2013-11-22 2016-11-23 维迪诺蒂有限公司 Light field processing method
CN106558017A (en) * 2015-09-25 2017-04-05 无锡羿飞科技有限公司 Spherical display image processing method and system
CN105894549A (en) * 2015-10-21 2016-08-24 乐卡汽车智能科技(北京)有限公司 Panorama assisted parking system and device and panorama image display method
CN107169926A (en) * 2017-04-26 2017-09-15 腾讯科技(深圳)有限公司 Image processing method and device
CN108830787A (en) * 2018-06-20 2018-11-16 北京微播视界科技有限公司 The method, apparatus and electronic equipment of anamorphose

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Liu Yu et al.: "Array Camera Imaging Technology and Applications", 30 April 2018, National University of Defense Technology Press, pages: 19 *
Chen Haibo; He Guoyu: "A Fully-Mapped Near-Field Imaging Method", Journal of Microwaves, no. 1 *
Tao Fei et al.: "Geometric Calibration Data Processing and Software Design for a Polarization Imager", vol. 54, no. 54, pages 091005 - 3 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063299A (en) * 2022-08-19 2022-09-16 北京睿芯高通量科技有限公司 Image preprocessing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110070496B (en) Method and device for generating image special effect and hardware device
CN110070495B (en) Image processing method and device and electronic equipment
CN110288551B (en) Video beautifying method and device and electronic equipment
CN110062157B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN110070515B (en) Image synthesis method, apparatus and computer-readable storage medium
CN113126937A (en) Display terminal adjusting method and display terminal
US11893770B2 (en) Method for converting a picture into a video, device, and storage medium
CN110070586B (en) Color card generation method and device and electronic equipment
CN110264430B (en) Video beautifying method and device and electronic equipment
CN110070482B (en) Image processing method, apparatus and computer readable storage medium
CN111292245A (en) Image processing method and device
CN111292247A (en) Image processing method and device
CN110070617B (en) Data synchronization method, device and hardware device
CN116596748A (en) Image stylization processing method, apparatus, device, storage medium, and program product
CN111223105B (en) Image processing method and device
CN111292227A (en) Image processing method and device
CN111292276B (en) Image processing method and device
CN111221444A (en) Split screen special effect processing method and device, electronic equipment and storage medium
CN111200705B (en) Image processing method and device
CN116385469A (en) Special effect image generation method and device, electronic equipment and storage medium
CN111415393B (en) Method and device for adjusting display of multimedia blackboard, medium and electronic equipment
CN110807114B (en) Method, device, terminal and storage medium for picture display
CN111199519B (en) Method and device for generating special effect package
CN113066166A (en) Image processing method and device and electronic equipment
CN111353929A (en) Image processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination