CN107025638B - Image processing method and device


Info

Publication number
CN107025638B
Authority
CN
China
Prior art keywords: image, processing, pixel points, coordinates, distortion
Prior art date
Legal status
Active
Application number
CN201710193987.7A
Other languages
Chinese (zh)
Other versions
CN107025638A (en)
Inventor
李飞云
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201710193987.7A
Publication of CN107025638A
Application granted
Publication of CN107025638B

Classifications

    • G06T Image data processing or generation, in general (G Physics; G06 Computing; calculating or counting)
    • G06T5/80 Geometric correction (G06T5/00 Image enhancement or restoration)
    • G06T1/0021 Image watermarking (G06T1/00 General purpose image data processing)
    • G06T2207/10024 Color image (G06T2207/10 Image acquisition modality; G06T2207/00 Indexing scheme for image analysis or image enhancement)
    • G06T2207/30168 Image quality inspection (G06T2207/30 Subject of image; Context of image processing)

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image processing method and apparatus, which are used to reduce deformation of a watermark in an image during distortion processing, so as to improve image quality. The method comprises the following steps: for each pixel point in an image to be subjected to distortion processing, obtaining the pixel point; determining whether the coordinates of the pixel point belong to a known watermark region; when the coordinates of the pixel point belong to the preset watermark region, keeping the coordinates of the pixel point unchanged; and when the coordinates of the pixel point do not belong to the preset watermark region, performing distortion processing on the pixel point.

Description

Image processing method and device
Technical Field
The present disclosure relates to the field of communications and computer processing, and in particular, to a method and apparatus for image processing.
Background
With the development of electronic technology, devices such as mobile terminals and digital cameras have been widely used. Users have increasingly high requirements on photograph quality, and to meet these requirements, photographing devices such as mobile terminals and digital cameras may be equipped with lenses with higher photographing performance, such as wide-angle lenses. However, a picture taken with a wide-angle lens is distorted at the edges, which affects picture quality. Distortion processing can be performed on the image to improve image quality. However, if the image contains a watermark, the watermark is deformed during the distortion processing, which adversely affects the image quality.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a method and apparatus for image processing.
According to a first aspect of embodiments of the present disclosure, there is provided a method of image processing, including:
for each pixel point in an image to be subjected to distortion processing, obtaining the pixel point;
determining whether the coordinates of the pixel point belong to a known watermark region;
when the coordinates of the pixel point belong to the preset watermark region, keeping the coordinates of the pixel point unchanged;
and when the coordinates of the pixel point do not belong to the preset watermark region, performing distortion processing on the pixel point.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: this embodiment excludes the watermark region when the image is subjected to distortion processing, ensuring that the watermark is not deformed in the distortion process, thereby improving the image quality and the display effect.
In one embodiment, before determining whether the coordinates of the pixel point belong to the known watermark region, the method further includes:
and determining a watermark area in the image to be distorted according to a preset watermark characteristic.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: this embodiment provides an implementation for determining the watermark region according to the watermark features, which is suitable for identifying and processing images obtained from external sources.
In one embodiment, before obtaining a pixel point of the image to be distorted, the method further includes:
judging whether the obtained image needs to be subjected to distortion processing or not;
and when the acquired image needs to be subjected to distortion processing, determining the image as an image to be subjected to distortion processing.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the embodiment can identify whether the image is distorted or not, and carry out distortion processing on the image when the image is distorted, thereby improving the image quality and the display effect.
In one embodiment, the determining whether the image needs to be distorted includes at least one of:
judging whether the photographing equipment comprises a wide-angle lens or not according to the equipment information of the photographing equipment, and determining that the image needs to be subjected to distortion processing when the photographing equipment comprises the wide-angle lens;
and judging whether the texture features at the edge of the image comprise a plurality of arc-shaped texture features or not, and determining that the image needs to be distorted when the texture features at the edge of the image comprise the plurality of arc-shaped texture features.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the embodiment judges whether the image is distorted or not through the type of the photographing equipment or the texture characteristics of the image, provides various judging modes, and can improve the accuracy of the judging result.
In one embodiment, before determining that the image is to be distorted, the method further comprises:
carrying out hard decoding on the image to obtain a red, green and blue (RGB) texture image;
or
Performing soft decoding on the image to obtain a luminance and chrominance YUV image;
and converting the YUV image into an RGB texture image.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the embodiment converts the image into the RGB texture image, is convenient for distortion processing, and is beneficial to improving the image quality after the distortion processing.
In one embodiment, the distortion processing on the pixel point includes:
and processing the pixel points according to the width and the height of the image and the distance from the pixel points to the central point of the image.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the embodiment performs effective processing on the distortion characteristics, and improves the distortion processing effect.
In one embodiment, the processing the pixel point according to the width and height of the image and the distance from the pixel point to the central point of the image includes:
processing the pixel points according to the following formula:
x1=x·a;
y1=y·a;
a=arctan(r1)/r1;
r1=r/s;
[formula image in the original: expressions for r and R in terms of x, y, w and h]
wherein the central point of the image is taken as the origin of a two-dimensional rectangular coordinate system; (x, y) are the coordinates of the pixel point before processing; (x1, y1) are the coordinates of the pixel point after processing; a is a distortion processing coefficient; r is the distance from the pixel point (x, y) to the central point of the image before processing; r1 is the distance from the pixel point (x1, y1) to the central point of the image after processing; R is the distance from a vertex of the image to the central point of the image; w is the width of the image; h is the height of the image; and s is a preset effect parameter.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the present embodiment provides an implementation of distortion handling.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for image processing, comprising:
the acquisition module is used for acquiring one pixel point of the image to be distorted aiming at each pixel point in the image to be distorted;
the first judgment module is used for judging whether the coordinates of the pixel points belong to a known watermark region;
the maintaining module is used for maintaining the coordinates of the pixel points unchanged when the coordinates of the pixel points belong to a preset watermark region;
and the distortion module is used for carrying out distortion processing on the pixel points when the coordinates of the pixel points do not belong to a preset watermark region.
In one embodiment, the apparatus further comprises:
and the area module is used for determining the watermark area in the image to be distorted according to the preset watermark characteristics.
In one embodiment, the apparatus further comprises:
the second judgment module is used for judging whether the acquired image needs to be subjected to distortion processing or not;
and the determining module is used for determining the image as an image to be subjected to distortion processing when it is determined that distortion processing needs to be performed on the obtained image.
In one embodiment, the second determination module comprises at least one of the following sub-modules:
the first judgment submodule is used for judging whether the photographing equipment comprises a wide-angle lens or not according to the equipment information of the photographing equipment, and when the photographing equipment comprises the wide-angle lens, the image needs to be subjected to distortion processing;
and the second judging submodule is used for judging whether the texture features at the edge of the image comprise a plurality of arc-shaped texture features or not, and determining that the image needs to be distorted when the texture features at the edge of the image comprise the plurality of arc-shaped texture features.
In one embodiment, the apparatus further comprises:
the hard decoding module is used for carrying out hard decoding on the image to obtain a red, green and blue (RGB) texture image;
or
The soft decoding module is used for carrying out soft decoding on the image to obtain a luminance and chrominance YUV image;
and the conversion module is used for converting the YUV image into an RGB texture image.
In one embodiment, the distortion module comprises:
and the distortion submodule is used for processing the pixel points according to the width and the height of the image and the distance from the pixel points to the central point of the image.
In one embodiment, the distortion submodule processes the pixel points according to the following formula:
x1=x·a;
y1=y·a;
a=arctan(r1)/r1;
r1=r/s;
[formula image in the original: expressions for r and R in terms of x, y, w and h]
wherein the central point of the image is taken as the origin of a two-dimensional rectangular coordinate system; (x, y) are the coordinates of the pixel point before processing; (x1, y1) are the coordinates of the pixel point after processing; a is a distortion processing coefficient; r is the distance from the pixel point (x, y) to the central point of the image before processing; r1 is the distance from the pixel point (x1, y1) to the central point of the image after processing; R is the distance from a vertex of the image to the central point of the image; w is the width of the image; h is the height of the image; and s is a preset effect parameter.
According to a third aspect of the embodiments of the present disclosure, there is provided an apparatus of image processing, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
obtaining a pixel point of the image to be distorted aiming at each pixel point in the image to be distorted;
judging whether the coordinates of the pixel points belong to a known watermark region or not;
when the coordinates of the pixel points belong to a preset watermark region, keeping the coordinates of the pixel points unchanged;
and when the coordinates of the pixel points do not belong to a preset watermark region, carrying out distortion processing on the pixel points.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a method of image processing according to an exemplary embodiment.
FIG. 2 is a schematic illustration of an image shown according to an exemplary embodiment.
FIG. 3 is a schematic diagram illustrating an image according to an exemplary embodiment.
FIG. 4 is a schematic illustration of an image shown according to an exemplary embodiment.
FIG. 5 is a flow diagram illustrating a method of image processing according to an exemplary embodiment.
FIG. 6 is a flow diagram illustrating a method of image processing according to an exemplary embodiment.
FIG. 7 is a flow diagram illustrating a method of image processing according to an exemplary embodiment.
Fig. 8 is a block diagram illustrating an apparatus for image processing according to an exemplary embodiment.
Fig. 9 is a block diagram illustrating an apparatus for image processing according to an exemplary embodiment.
Fig. 10 is a block diagram illustrating an apparatus for image processing according to an exemplary embodiment.
FIG. 11 is a block diagram illustrating a second determination module in accordance with an exemplary embodiment.
Fig. 12 is a block diagram illustrating an apparatus for image processing according to an exemplary embodiment.
Fig. 13 is a block diagram illustrating an apparatus for image processing according to an exemplary embodiment.
Fig. 14 is a block diagram illustrating a distortion module in accordance with an exemplary embodiment.
FIG. 15 is a block diagram illustrating an apparatus in accordance with an example embodiment.
FIG. 16 is a block diagram illustrating an apparatus in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In the related art, some photographed images are distorted, mostly because the photographing lens is circular while the image is rectangular. The distortion deforms the image and affects image quality, so distortion processing can be performed on the image to solve this problem. However, the image may contain a watermark, such as one indicating the time and place at which the image was captured. The watermark is not captured through the lens, so it is not distorted, and performing distortion processing on the watermark portion has an adverse effect, resulting in poor image quality.
In order to solve the above problem, the present embodiment may perform distortion processing on regions other than the watermark in the image, and the watermark portion is kept unchanged, so as to improve the image quality and the display effect.
Fig. 1 is a flowchart illustrating a method of image processing according to an exemplary embodiment, which may be implemented by a device such as a mobile terminal, as shown in fig. 1, and includes the steps of:
in step 101, a pixel point of the image to be distorted is obtained for each pixel point in the image to be distorted.
In step 102, it is determined whether the coordinates of the pixel point belong to a known watermark region.
In step 103, when the coordinates of the pixel point belong to a preset watermark region, the coordinates of the pixel point are kept unchanged.
In step 104, when the coordinates of the pixel point do not belong to the preset watermark region, distortion processing is performed on the pixel point.
Taking a mobile terminal or a camera with a photographing function as an example: after an image is obtained through the lens, a watermark can be added to the image according to a preset configuration, the watermark content being the time, place and the like. When the watermark is added, the watermark region is configured in advance, and the watermark is added to the image according to the configured watermark region. Thus, the watermark region is pre-configured and known.
When distortion processing is performed on the image to be subjected to distortion processing, the pixel points in the watermark region are not distortion-processed and are kept unchanged, while distortion processing is performed on the pixel points outside the watermark region. In this way, the distortion of the image can be corrected without deforming the watermark region, which effectively improves the image quality.
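As a rough illustration of this flow (and not the patented implementation itself), the Python sketch below walks over every pixel of the image, leaves pixels whose coordinates fall inside the known watermark region untouched, and applies a caller-supplied coordinate mapping to all other pixels. The rectangular `watermark_rect` and the `map_coords` callback are interfaces assumed for the example; a later sketch shows one possible `map_coords`.

```python
def process_image(image, watermark_rect, map_coords):
    """image: HxWx3 numpy array; watermark_rect: (x0, y0, x1, y1) in pixels;
    map_coords(x, y, w, h) -> new (x, y) implementing the distortion processing."""
    h, w = image.shape[:2]
    out = image.copy()                      # start from the original image
    x0, y0, x1, y1 = watermark_rect
    for y in range(h):
        for x in range(w):
            if x0 <= x < x1 and y0 <= y < y1:
                continue                    # watermark pixel: coordinates kept unchanged
            nx, ny = map_coords(x, y, w, h)  # distortion processing of the pixel
            nx = min(max(int(round(nx)), 0), w - 1)
            ny = min(max(int(round(ny)), 0), h - 1)
            if x0 <= nx < x1 and y0 <= ny < y1:
                continue                    # do not overwrite the protected watermark
            out[ny, nx] = image[y, x]
    return out
```

A production implementation would typically use inverse mapping with interpolation, or run on the GPU as described later, but the structure stays the same: skip the watermark region and remap everything else.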
In one embodiment, before determining whether the coordinates of the pixel point belong to the known watermark region, the method further includes: and A.
In the step A, according to the preset watermark characteristics, determining the watermark area in the image to be distorted.
Taking a mobile terminal or a computer as an example, the mobile terminal may obtain an image from a channel such as the internet, which may already contain a watermark. Therefore, it is necessary to identify whether a watermark is present in an image and to determine a watermark region when a watermark is present.
A watermark feature library is stored in advance. Taking a watermark consisting of a time and a place as an example, the watermark features include the texture features of digits and characters. The image is analyzed to extract texture features, the extracted texture features are compared with the texture features in the watermark feature library, and it is determined whether they contain the texture features of digits or characters. If so, the matched features are determined to be a watermark, and the region where they are located is the watermark region; if not, it is determined that the image does not contain a watermark.
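A minimal sketch of this matching step is shown below, assuming the watermark feature library is stored as a set of small grayscale template images of digits and characters, and that plain template matching stands in for the feature comparison; the patent does not fix the feature representation, so these are illustrative choices.

```python
import cv2
import numpy as np

def find_watermark_region(image_bgr, templates, score_threshold=0.8):
    """templates: list of small grayscale arrays (digit/character glyphs)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    hits = []
    for tpl in templates:
        res = cv2.matchTemplate(gray, tpl, cv2.TM_CCOEFF_NORMED)
        ys, xs = np.where(res >= score_threshold)
        th, tw = tpl.shape[:2]
        hits.extend((int(x), int(y), int(x) + tw, int(y) + th)
                    for x, y in zip(xs, ys))
    if not hits:
        return None                         # no watermark found in the image
    # The bounding box over all matched glyphs approximates the watermark region.
    xs0, ys0, xs1, ys1 = zip(*hits)
    return (min(xs0), min(ys0), max(xs1), max(ys1))
```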
In one embodiment, before obtaining a pixel point of the image to be distorted, the method further includes: step B1-step B2.
In step B1, it is determined whether or not the obtained image needs to be subjected to distortion processing.
In step B2, when it is determined that distortion processing needs to be performed on the obtained image, the image is determined to be an image to be distortion-processed.
When it is determined that the image does not need to be distorted, the distortion processing may not be performed, and the present flow is ended.
Taking the mobile terminal as an example, when the user opens the photographing application in the mobile terminal, the photographing application enters a photographing mode. At this time, the lens starts to frame, shooting is performed, and the acquired image is stored in the cache. It is equivalent to obtaining an image taken by the photographing apparatus. The mobile terminal can judge whether the image needs to be distorted or not, and when the image needs to be distorted, the image is processed. And displaying the processed image on a display screen. The display screen displays images with high quality, and user experience is improved.
Alternatively, the user browses images on the network through a browser or the like application on the mobile terminal. The mobile terminal downloads the image from the network and stores the image in a local cache, which is equivalent to obtaining the image shot by the shooting device. The mobile terminal can judge whether the image needs to be distorted or not, and when the image needs to be distorted, the image is processed. And displaying the processed image on a display screen.
Or, the user opens the image through the retouching software, which is equivalent to obtaining the image shot by the shooting device. And clicking a distortion processing option in the cropping software by the user, which is equivalent to determining that the image needs to be subjected to distortion processing. And the mobile terminal processes the image. And displaying the processed image to the user again.
Or, a smart camera takes images, accesses the Internet over a wireless connection such as WiFi, and uploads the images to a server through the Internet. Along with the images, the device identifier, IP (Internet Protocol) address and the like of the smart camera are uploaded. The user logs in to the server through the smart camera's application on the mobile terminal and downloads the image from the server, which is equivalent to obtaining an image taken by the photographing device. Subsequent processing is then performed.
The mobile terminal can determine whether the image is distorted and perform the distortion processing through a graphics processing unit (GPU), to relieve the processing pressure on the central processing unit (CPU).
The embodiment determines whether to perform distortion processing by identifying whether the image is distorted to reduce erroneous processing.
In one embodiment, step B1 includes at least one of the following steps: step B11 and step B12.
In step B11, it is determined whether the photographing apparatus includes a wide-angle lens according to the apparatus information of the photographing apparatus, and when the photographing apparatus includes the wide-angle lens, it is determined that the image needs to be distorted.
A wide-angle lens has a large field of view, and the images it captures are prone to distortion. This embodiment can therefore determine whether distortion processing is necessary according to the type of device.
If the photographing device is built into the processing device, the processing device can directly know whether the photographing device uses a wide-angle lens. Otherwise, when the processing device obtains the image, it requests the device information of the photographing device from the peer device that provides the image. A configuration file indicating whether each kind of device information corresponds to a wide-angle lens is configured in advance.
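For illustration only, such a pre-configured correspondence can be as simple as a lookup table keyed by the device information; the model names below are placeholders rather than values from the patent.

```python
# Pre-configured table: device identifier -> whether it uses a wide-angle lens.
WIDE_ANGLE_DEVICES = {
    "example-phone-a": True,      # placeholder entries, not from the patent
    "example-camera-b": False,
}

def needs_distortion_processing(device_model: str) -> bool:
    # Unknown devices default to False here; a real system could fall back to
    # the texture-based check described in the next step.
    return WIDE_ANGLE_DEVICES.get(device_model.strip().lower(), False)
```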
In step B12, it is determined whether the texture features at the image edge include texture features of a plurality of arcs, and when the texture features at the image edge include texture features of a plurality of arcs, it is determined that the image needs to be distorted.
If the image is distorted, most straight lines in the image become arcs, or arcs with small curvature become arcs with large curvature. Based on this characteristic, this embodiment can determine whether distortion has occurred through arc-shaped texture features. Model training (for example, using a deep learning algorithm) can be performed in advance on the texture features of a large number of normal images and distorted images, and the trained model is used to judge whether the texture features at the edges of the image include a plurality of arc-shaped texture features. Alternatively, a texture feature library of a large number of distorted images is preset, and whether the texture features at the edges of the image include a plurality of arc-shaped texture features is judged by matching the texture features of the image against the texture features in the library.
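The sketch below shows one way the arc check could be realized without a trained model, by measuring how far long edge contours near the image border bow away from the straight chord joining their endpoints; the border width and thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def edges_contain_many_arcs(image_bgr, border_frac=0.15, min_points=80,
                            bow_threshold=3.0, min_arcs=5):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    h, w = edges.shape
    # Keep only a band around the border, where wide-angle distortion is strongest.
    bw, bh = int(w * border_frac), int(h * border_frac)
    mask = np.zeros_like(edges)
    mask[:bh, :] = mask[-bh:, :] = mask[:, :bw] = mask[:, -bw:] = 255
    edges = cv2.bitwise_and(edges, mask)
    # OpenCV 4.x return signature: (contours, hierarchy).
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    arcs = 0
    for c in contours:
        pts = c.reshape(-1, 2).astype(np.float64)
        if len(pts) < min_points:
            continue
        p0, p1 = pts[0], pts[-1]
        dx, dy = p1 - p0
        chord = np.hypot(dx, dy)
        if chord < 1e-6:
            continue
        # Maximum deviation of the contour from the straight chord between its ends.
        deviation = np.abs(dx * (pts[:, 1] - p0[1]) - dy * (pts[:, 0] - p0[0])) / chord
        if deviation.max() > bow_threshold:
            arcs += 1
    return arcs >= min_arcs
```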
In one embodiment, before processing the pixel point, the method further includes: step C1, or step C2 and step C3.
In step C1, the image is hard decoded to obtain a red, green, and blue (RGB) texture image.
The image in this embodiment may be encoded in a format such as H.264. If the hardware of the processing device supports hard decoding, the RGB texture image can be obtained by hard decoding, which facilitates subsequent distortion processing.
In step C2, the image is soft decoded to obtain a luminance-chrominance (YUV) image.
In step C3, the YUV image is converted into an RGB texture image.
The image in this embodiment may be encoded in a format such as H.264. If the hardware of the processing device does not support hard decoding, a soft decoding method, such as FFmpeg (a software multimedia decoding framework), may be used to obtain a YUV image, which is then converted into an RGB texture image. This facilitates subsequent distortion processing.
If the processing device supports both hard decoding and soft decoding, hard decoding can be selected, as its processing efficiency is higher.
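A minimal sketch of the YUV-to-RGB step after soft decoding is given below, using BT.601 full-range coefficients; the colour matrix is an assumption, since the text does not specify one, and a library routine (for example OpenCV's cvtColor) could be used instead.

```python
import numpy as np

def yuv_to_rgb(yuv):
    """yuv: HxWx3 uint8 array with channel order Y, U, V (full range)."""
    y = yuv[..., 0].astype(np.float32)
    u = yuv[..., 1].astype(np.float32) - 128.0
    v = yuv[..., 2].astype(np.float32) - 128.0
    r = y + 1.402 * v                       # BT.601 full-range coefficients
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    rgb = np.stack([r, g, b], axis=-1)
    return np.clip(rgb, 0.0, 255.0).astype(np.uint8)
```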
In one embodiment, step 104 includes: step D1.
In step D1, the pixel point is processed according to the width and height of the image and the distance from the pixel point to the center point of the image.
According to the characteristics of distortion, the farther a pixel point is from the image center point, the more serious the distortion. Therefore, in this embodiment, each pixel point in the image is processed according to the width and height of the image and the distance from the pixel point to the central point of the image. The processing effect is good, and the image quality and display effect can be effectively improved.
In one embodiment, step D1 includes: step D11.
In step D11, the pixel points are processed according to the following formula:
x1=x·a;
y1=y·a;
a=arctan(r1)/r1;
r1=r/s;
wherein the central point of the image is taken as the origin of a two-dimensional rectangular coordinate system; (x, y) are the coordinates of the pixel point before processing; (x1, y1) are the coordinates of the pixel point after processing; a is a distortion processing coefficient; r is the distance from the pixel point (x, y) to the central point of the image before processing; r1 is the distance from the pixel point (x1, y1) to the central point of the image after processing; R is the distance from a vertex of the image to the central point of the image; w is the width of the image; h is the height of the image; and s is a preset effect parameter.
As shown in fig. 2, O is the origin of the two-dimensional rectangular coordinate system and is also the center point of the image. M is any pixel point in the image and is a pixel point before processing. M1 is the pixel point obtained by processing the pixel point M. N is a vertex of the image. Here, s can be set according to experience or experimental results; for example, s has a value in the range [1.0, 1.5], such as s = 1.1.
Aiming at the characteristic of distortion, the closer to the edge of the image, the larger the distortion degree. Therefore, it can be seen from the above formula that the closer to the edge of the image, the greater the degree of distortion processing, and the better the processing effect.
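Putting the formula into code, the sketch below maps a pixel's coordinates as described above. Two points are assumptions made for the example: the coordinates are shifted so that the image centre is the origin and then shifted back, and r is normalized by the vertex-to-centre distance R (the expressions defining r and R are published only as formula images). The function matches the `map_coords` callback used in the earlier loop sketch.

```python
import math

def map_coords(x, y, w, h, s=1.1):
    """Map pixel coords (x, y), given with origin at the top-left corner,
    to their distortion-processed position; s is the preset effect parameter."""
    cx, cy = w / 2.0, h / 2.0
    dx, dy = x - cx, y - cy                 # centre-origin coordinates
    R = math.hypot(w / 2.0, h / 2.0)        # distance from a vertex to the centre
    r = math.hypot(dx, dy) / R              # assumed normalized radius
    if r == 0.0:
        return x, y                         # the centre pixel maps to itself
    r1 = r / s
    a = math.atan(r1) / r1                  # distortion processing coefficient
    return dx * a + cx, dy * a + cy         # x1 = x*a, y1 = y*a, shifted back
```

Because a = arctan(r1)/r1 decreases as r1 grows, pixels near the border are moved more strongly than those near the centre, which matches the observation above that the degree of processing increases toward the edge of the image.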
As shown in fig. 3 and fig. 4, fig. 3 is a distorted image: the window-frame portion on the right side of fig. 3 is clearly distorted and appears curved, while the watermark region in the upper left corner of the image is not distorted. After distortion processing, fig. 4 is obtained, in which the window-frame portion on the right side is essentially a straight line, showing a good distortion processing effect. The watermark region is not deformed, and the image quality is maintained.
The implementation is described in detail below by way of several embodiments.
Fig. 5 is a flowchart illustrating a method of image processing according to an exemplary embodiment, which may be implemented by an image processing device such as a mobile terminal, as shown in fig. 5, and includes the steps of:
in step 501, it is determined whether the photographing device includes a wide-angle lens according to the device information of the photographing device, and when the photographing device includes the wide-angle lens, it is determined that distortion processing needs to be performed on the image. Namely, the image is determined to be an image to be distorted. Step 502 is continued. When the photographing apparatus does not include a wide-angle lens, it is determined that distortion processing is not required for the image. And ending the process.
In step 502, a pixel point of the image to be distorted is obtained for each pixel point in the image to be distorted.
In step 503, a watermark region in the image to be distorted is determined according to a preset watermark characteristic.
In step 504, it is determined whether the coordinates of the pixel point belong to a known watermark region. When the coordinates of the pixel point belong to a preset watermark region, continuing to step 505; and when the coordinates of the pixel point do not belong to the preset watermark region, continuing to step 506.
In step 505, the coordinates of the pixel points are kept unchanged.
In step 506, the pixel point is distorted.
Fig. 6 is a flowchart illustrating a method of image processing according to an exemplary embodiment, which may be implemented by an image processing device such as a mobile terminal, as shown in fig. 6, including the steps of:
in step 601, it is determined whether the texture features at the edge of the image include a plurality of arc-shaped texture features, and when the texture features at the edge of the image include a plurality of arc-shaped texture features, it is determined that the image needs to be distorted. Namely, the image is determined to be an image to be distorted. Step 602 is continued. Determining that no distortion processing is required for the image when the texture features at the edges of the image do not include texture features of a plurality of arcs. And ending the process.
In step 602, a pixel point of the image to be distorted is obtained for each pixel point in the image to be distorted.
In step 603, according to a preset watermark characteristic, a watermark region in the image to be distorted is determined.
In step 604, it is determined whether the coordinates of the pixel point belong to a known watermark region. When the coordinates of the pixel point belong to a preset watermark region, continuing to step 605; and continuing to step 606 when the coordinates of the pixel point do not belong to the preset watermark region.
In step 605, the coordinates of the pixel points are kept unchanged.
In step 606, the image is hard decoded to obtain an RGB texture image.
In step 607, the pixel point is distorted.
Fig. 7 is a flowchart illustrating a method of image processing according to an exemplary embodiment, which may be implemented by an image processing device such as a mobile terminal, as illustrated in fig. 7, including the steps of:
in step 701, it is determined whether the photographing device includes a wide-angle lens according to the device information of the photographing device, and when the photographing device includes the wide-angle lens, it is determined that distortion processing needs to be performed on the image. Namely, the image is determined to be an image to be distorted. Step 702 is continued. When the photographing apparatus does not include a wide-angle lens, it is determined that distortion processing is not required for the image. And ending the process.
In step 702, a pixel point of the image to be distorted is obtained for each pixel point in the image to be distorted.
In step 703, a watermark region in the image to be distorted is determined according to a preset watermark feature.
In step 704, it is determined whether the coordinates of the pixel belong to a known watermark region. When the coordinates of the pixel point belong to a preset watermark region, continuing to step 705; and when the coordinates of the pixel point do not belong to the preset watermark region, continuing to step 706.
In step 705, the coordinates of the pixel points are kept unchanged.
In step 706, the image is soft decoded to obtain a YUV image.
In step 707, the YUV image is converted to an RGB texture image.
In step 708, the pixel points are processed according to the width and height of the image and the distance from the pixel points to the center point of the image.
The above embodiments can be combined in various ways according to actual needs.
The above description explains how image processing is implemented by a device such as a mobile terminal or a computer. The following describes the internal structure and functions of the device.
FIG. 8 is a schematic diagram illustrating an apparatus for image processing according to an exemplary embodiment. Referring to fig. 8, the apparatus includes: an acquisition module 801, a first determination module 802, a holding module 803, and a distortion module 804.
The obtaining module 801 is configured to obtain one pixel point of the image to be distorted for each pixel point in the image to be distorted.
A first determining module 802, configured to determine whether the coordinates of the pixel belong to a known watermark region.
A keeping module 803, configured to keep the coordinates of the pixel unchanged when the coordinates of the pixel belong to a preset watermark region.
And a distortion module 804, configured to perform distortion processing on the pixel point when the coordinate of the pixel point does not belong to the preset watermark region.
In one embodiment, as shown in fig. 9, the apparatus further comprises: an area module 901.
An area module 901, configured to determine a watermark area in the image to be distorted according to a preset watermark characteristic.
In one embodiment, as shown in fig. 10, the apparatus further comprises: a second decision module 1001 and a determination module 1002.
A second determination module 1001 configured to determine whether distortion processing needs to be performed on the obtained image;
the determining module 1002 is configured to determine that the obtained image is an image to be distorted when it is determined that distortion processing needs to be performed on the image.
In one embodiment, as shown in fig. 11, the second determination module 1001 includes at least one of the following sub-modules: a first decision submodule 1101 and a second decision submodule 1102.
The first determining submodule 1101 is configured to determine whether the photographing device includes a wide-angle lens according to the device information of the photographing device, and determine that distortion processing needs to be performed on the image when the photographing device includes the wide-angle lens.
The second determining sub-module 1102 is configured to determine whether texture features at the edge of the image include a plurality of arc-shaped texture features, and determine that distortion processing needs to be performed on the image when the texture features at the edge of the image include the plurality of arc-shaped texture features.
In one embodiment, as shown in fig. 12 and 13, the apparatus further comprises: the hard decoding module 1201, or soft decoding module 1301 and the conversion module 1302.
And a hard decoding module 1201, configured to perform hard decoding on the image to obtain a red, green, and blue RGB texture image.
A soft decoding module 1301, configured to perform soft decoding on the image to obtain a luminance and chrominance YUV image.
A conversion module 1302, configured to convert the YUV image into an RGB texture image.
In one embodiment, as shown in fig. 14, the distortion module 804 includes: distortion sub-module 1401.
And the distortion submodule 1401 is configured to process the pixel according to the width and height of the image and the distance between the pixel and the central point of the image.
In one embodiment, the distortion submodule 1401 processes the pixel points according to the following formula:
x1=x·a;
y1=y·a;
a=arctan(r1)/r1;
r1=r/s;
[formula image in the original: expressions for r and R in terms of x, y, w and h]
wherein the central point of the image is taken as the origin of a two-dimensional rectangular coordinate system; (x, y) are the coordinates of the pixel point before processing; (x1, y1) are the coordinates of the pixel point after processing; a is a distortion processing coefficient; r is the distance from the pixel point (x, y) to the central point of the image before processing; r1 is the distance from the pixel point (x1, y1) to the central point of the image after processing; R is the distance from a vertex of the image to the central point of the image; w is the width of the image; h is the height of the image; and s is a preset effect parameter.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 15 is a block diagram illustrating an apparatus 1500 for image processing according to an example embodiment. For example, the apparatus 1500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 15, apparatus 1500 may include one or more of the following components: processing components 1502, memory 1504, power components 1506, multimedia components 1508, audio components 1510, input/output (I/O) interfaces 1512, sensor components 1514, and communication components 1516.
The processing component 1502 generally controls overall operation of the device 1500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 1502 may include one or more processors 1520 executing instructions to perform all or a portion of the steps of the methods described above. Further, processing component 1502 may include one or more modules that facilitate interaction between processing component 1502 and other components. For example, processing component 1502 may include a multimedia module to facilitate interaction between multimedia component 1508 and processing component 1502.
The memory 1504 is configured to store various types of data to support operations at the apparatus 1500. Examples of such data include instructions for any application or method operating on the device 1500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1504 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 1506 provides power to the various components of the device 1500. The power components 1506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power supplies for the apparatus 1500.
The multimedia component 1508 includes a screen that provides an output interface between the device 1500 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, multimedia component 1508 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 1500 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1510 is configured to output and/or input audio signals. For example, the audio component 1510 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 1500 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1504 or transmitted via the communication component 1516. In some embodiments, audio component 1510 also includes a speaker for outputting audio signals.
The I/O interface 1512 provides an interface between the processing component 1502 and peripheral interface modules, which can be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1514 includes one or more sensors for providing status assessments of various aspects of the apparatus 1500. For example, the sensor assembly 1514 can detect an open/closed state of the device 1500 and the relative positioning of components, such as the display and keypad of the device 1500. The sensor assembly 1514 can also detect a change in position of the device 1500 or a component of the device 1500, the presence or absence of user contact with the device 1500, the orientation or acceleration/deceleration of the device 1500, and a change in temperature of the device 1500. The sensor assembly 1514 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1516 is configured to facilitate wired or wireless communication between the apparatus 1500 and other devices. The apparatus 1500 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1516 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1516 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 1504 comprising instructions, executable by the processor 1520 of the apparatus 1500 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
An apparatus for image processing, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
obtaining a pixel point of the image to be distorted aiming at each pixel point in the image to be distorted;
judging whether the coordinates of the pixel points belong to a known watermark region or not;
when the coordinates of the pixel points belong to a preset watermark region, keeping the coordinates of the pixel points unchanged;
and when the coordinates of the pixel points do not belong to a preset watermark region, carrying out distortion processing on the pixel points.
The processor may be further configured to:
before the determining whether the coordinates of the pixel point belong to the known watermark region, the method further includes:
and determining a watermark area in the image to be distorted according to a preset watermark characteristic.
The processor may be further configured to:
before obtaining a pixel point of the image to be distorted, the method further comprises:
judging whether the obtained image needs to be subjected to distortion processing or not;
and when the acquired image needs to be subjected to distortion processing, determining the image as an image to be subjected to distortion processing.
The processor may be further configured to:
the judging whether the image needs to be distorted at least comprises one of the following steps:
judging whether the photographing equipment comprises a wide-angle lens or not according to the equipment information of the photographing equipment, and determining that the image needs to be subjected to distortion processing when the photographing equipment comprises the wide-angle lens;
and judging whether the texture features at the edge of the image comprise a plurality of arc-shaped texture features or not, and determining that the image needs to be distorted when the texture features at the edge of the image comprise the plurality of arc-shaped texture features.
The processor may be further configured to:
before determining that the image is to-be-distorted, the method further comprises:
carrying out hard decoding on the image to obtain a red, green and blue (RGB) texture image;
or
Performing soft decoding on the image to obtain a luminance and chrominance YUV image;
and converting the YUV image into an RGB texture image.
The processor may be further configured to:
and carrying out distortion processing on the pixel points, wherein the distortion processing comprises the following steps:
and processing the pixel points according to the width and the height of the image and the distance from the pixel points to the central point of the image.
The processor may be further configured to:
the processing the pixel point according to the width and the height of the image and the distance from the pixel point to the central point of the image comprises the following steps:
processing the pixel points according to the following formula:
x1=x·a;
y1=y·a;
a=arctan(r1)/r1;
r1=r/s;
[formula image in the original: expressions for r and R in terms of x, y, w and h]
wherein the central point of the image is taken as the origin of a two-dimensional rectangular coordinate system; (x, y) are the coordinates of the pixel point before processing; (x1, y1) are the coordinates of the pixel point after processing; a is a distortion processing coefficient; r is the distance from the pixel point (x, y) to the central point of the image before processing; r1 is the distance from the pixel point (x1, y1) to the central point of the image after processing; R is the distance from a vertex of the image to the central point of the image; w is the width of the image; h is the height of the image; and s is a preset effect parameter.
A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of a mobile terminal, enable the mobile terminal to perform a method of image processing, the method comprising:
obtaining a pixel point of the image to be distorted aiming at each pixel point in the image to be distorted;
judging whether the coordinates of the pixel points belong to a known watermark region or not;
when the coordinates of the pixel points belong to a preset watermark region, keeping the coordinates of the pixel points unchanged;
and when the coordinates of the pixel points do not belong to a preset watermark region, carrying out distortion processing on the pixel points.
The instructions in the storage medium may further include:
before the determining whether the coordinates of the pixel point belong to the known watermark region, the method further includes:
and determining a watermark area in the image to be distorted according to a preset watermark characteristic.
The instructions in the storage medium may further include:
before obtaining a pixel point of the image to be distorted, the method further comprises:
judging whether the obtained image needs to be subjected to distortion processing or not;
and when the acquired image needs to be subjected to distortion processing, determining the image as an image to be subjected to distortion processing.
The instructions in the storage medium may further include:
the judging whether the image needs to be distorted at least comprises one of the following steps:
judging whether the photographing equipment comprises a wide-angle lens or not according to the equipment information of the photographing equipment, and determining that the image needs to be subjected to distortion processing when the photographing equipment comprises the wide-angle lens;
and judging whether the texture features at the edge of the image comprise a plurality of arc-shaped texture features or not, and determining that the image needs to be distorted when the texture features at the edge of the image comprise the plurality of arc-shaped texture features.
The instructions in the storage medium may further include:
before determining that the image is to-be-distorted, the method further comprises:
carrying out hard decoding on the image to obtain a red, green and blue (RGB) texture image;
or
Performing soft decoding on the image to obtain a luminance and chrominance YUV image;
and converting the YUV image into an RGB texture image.
The instructions in the storage medium may further include:
and carrying out distortion processing on the pixel points, wherein the distortion processing comprises the following steps:
and processing the pixel points according to the width and the height of the image and the distance from the pixel points to the central point of the image.
The instructions in the storage medium may further include:
the processing the pixel point according to the width and the height of the image and the distance from the pixel point to the central point of the image comprises the following steps:
processing the pixel points according to the following formula:
x1=x·a;
y1=y·a;
a=arctan(r1)/r1;
r1=r/s;
[formula image in the original: expressions for r and R in terms of x, y, w and h]
wherein the central point of the image is taken as the origin of a two-dimensional rectangular coordinate system; (x, y) are the coordinates of the pixel point before processing; (x1, y1) are the coordinates of the pixel point after processing; a is a distortion processing coefficient; r is the distance from the pixel point (x, y) to the central point of the image before processing; r1 is the distance from the pixel point (x1, y1) to the central point of the image after processing; R is the distance from a vertex of the image to the central point of the image; w is the width of the image; h is the height of the image; and s is a preset effect parameter.
Fig. 16 is a block diagram illustrating an apparatus 1600 for image processing according to an example embodiment. For example, the apparatus 1600 may be provided as a computer. Referring to fig. 16, apparatus 1600 includes a processing component 1622 that further includes one or more processors, and memory resources, represented by memory 1632, for storing instructions, such as applications, that are executable by processing component 1622. The application programs stored in memory 1632 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1622 is configured to execute instructions to perform the above-described method of image processing.
The apparatus 1600 may also include a power component 1626 configured to perform power management for the apparatus 1600, a wired or wireless network interface 1650 configured to connect the apparatus 1600 to a network, and an input/output (I/O) interface 1658. The apparatus 1600 may operate based on an operating system stored in the memory 1632, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. A method of image processing, comprising:
for each pixel point in an image to be subjected to distortion processing, obtaining the pixel point;
determining whether coordinates of the pixel point belong to a preset watermark region;
when the coordinates of the pixel point belong to the preset watermark region, keeping the coordinates of the pixel point unchanged; and
when the coordinates of the pixel point do not belong to the preset watermark region, performing distortion processing on the pixel point;
wherein, before the obtaining of the pixel point of the image to be subjected to distortion processing, the method further comprises:
determining whether an obtained image needs to be subjected to distortion processing; and
when the obtained image needs to be subjected to distortion processing, determining the image as the image to be subjected to distortion processing;
wherein the determining whether the obtained image needs to be subjected to distortion processing comprises:
determining, according to device information of a photographing device, whether the photographing device comprises a wide-angle lens, and determining that the image needs to be subjected to distortion processing when the photographing device comprises the wide-angle lens, wherein the device information indicates whether a profile of the wide-angle lens is included.
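As a rough illustration of the flow recited in claim 1, and not the patented implementation itself, the sketch below keeps watermark pixels in place and remaps the remaining pixels with the distortion formula of claim 6. The pixels dictionary, the watermark_region set, and the normalization of r by R are all illustrative assumptions.

import math

def process_image(pixels, w, h, watermark_region, s=1.0):
    # pixels: dict mapping (x, y) pixel coordinates to colour values.
    # watermark_region: set of (x, y) coordinates belonging to the preset watermark region.
    # Returns a new dict in which only non-watermark pixels are remapped.
    cx, cy = w / 2.0, h / 2.0
    R = math.sqrt(w * w + h * h) / 2.0            # assumed: distance from a vertex to the center
    out = {}
    for (x, y), colour in pixels.items():
        if (x, y) in watermark_region:
            out[(x, y)] = colour                  # watermark pixels keep their coordinates
            continue
        dx, dy = x - cx, y - cy                   # coordinates relative to the image center
        r = math.sqrt(dx * dx + dy * dy) / R      # assumed normalization by R
        if r > 0.0:
            r1 = r / s
            a = math.atan(r1) / r1                # distortion processing coefficient
            dx, dy = dx * a, dy * a
        # Forward mapping; collisions and holes are ignored in this sketch.
        out[(round(dx + cx), round(dy + cy))] = colour
    return out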
2. The method of claim 1, wherein, before the determining whether the coordinates of the pixel point belong to a preset watermark region, the method further comprises:
determining the watermark region in the image to be subjected to distortion processing according to a preset watermark feature.
3. The method of claim 1, wherein the determining whether the obtained image needs to be subjected to distortion processing further comprises:
determining whether texture features at an edge of the image comprise a plurality of arc-shaped texture features, and determining that the image needs to be subjected to distortion processing when the texture features at the edge of the image comprise the plurality of arc-shaped texture features.
4. The method of image processing according to claim 1, wherein, before the determining the image as the image to be subjected to distortion processing, the method further comprises:
performing hard decoding on the image to obtain a red, green and blue (RGB) texture image;
or
performing soft decoding on the image to obtain a luminance and chrominance (YUV) image; and
converting the YUV image into an RGB texture image.
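Claim 4 does not fix a particular YUV-to-RGB conversion; purely as an aside, one common choice is the full-range BT.601 conversion sketched below, in which each component is assumed to lie in the range 0 to 255.

def yuv_to_rgb(y, u, v):
    # Convert one full-range BT.601 YUV (YCbCr) pixel to RGB.
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: max(0, min(255, int(round(c))))
    return clamp(r), clamp(g), clamp(b)

# Example: a mid-grey pixel with neutral chroma stays grey.
print(yuv_to_rgb(128, 128, 128))   # (128, 128, 128)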
5. The method of claim 1, wherein the performing distortion processing on the pixel point comprises:
processing the pixel point according to the width and the height of the image and the distance from the pixel point to the center point of the image.
6. The method of claim 5, wherein the processing the pixel point according to the width and the height of the image and the distance from the pixel point to the center point of the image comprises:
processing the pixel point according to the following formulas:
x1=x·a;
y1=y·a;
a=arctan(r1)/r1;
r1=r/s;
[formula image: definitions of r and R in terms of x, y, w and h]
wherein the center point of the image is taken as the origin of a two-dimensional rectangular coordinate system, (x, y) are the coordinates of the pixel point before processing, (x1, y1) are the coordinates of the pixel point after processing, a is the distortion processing coefficient, r is the distance from the pixel point (x, y) to the center point of the image before processing, r1 is the distance from the pixel point (x1, y1) to the center point of the image after processing, R is the distance from a vertex of the image to the center point of the image, w is the width of the image, h is the height of the image, and s is a preset effect parameter.
7. An apparatus for image processing, comprising:
an acquisition module configured to obtain, for each pixel point in an image to be subjected to distortion processing, the pixel point;
a first judgment module configured to determine whether coordinates of the pixel point belong to a preset watermark region;
a maintaining module configured to keep the coordinates of the pixel point unchanged when the coordinates of the pixel point belong to the preset watermark region; and
a distortion module configured to perform distortion processing on the pixel point when the coordinates of the pixel point do not belong to the preset watermark region;
wherein the apparatus further comprises:
a second judgment module configured to determine whether an obtained image needs to be subjected to distortion processing; and
a determining module configured to determine the image as the image to be subjected to distortion processing when it is determined that the obtained image needs to be subjected to distortion processing;
wherein the second judgment module comprises:
a first judgment submodule configured to determine, according to device information of a photographing device, whether the photographing device comprises a wide-angle lens, and to determine that the image needs to be subjected to distortion processing when the photographing device comprises the wide-angle lens, wherein the device information indicates whether a profile of the wide-angle lens is included.
8. The apparatus for image processing according to claim 7, further comprising:
an area module configured to determine the watermark region in the image to be subjected to distortion processing according to a preset watermark feature.
9. The apparatus of claim 7, wherein the second judgment module further comprises:
a second judgment submodule configured to determine whether texture features at an edge of the image comprise a plurality of arc-shaped texture features, and to determine that the image needs to be subjected to distortion processing when the texture features at the edge of the image comprise the plurality of arc-shaped texture features.
10. The apparatus for image processing according to claim 7, further comprising:
a hard decoding module configured to perform hard decoding on the image to obtain a red, green and blue (RGB) texture image; or
a soft decoding module configured to perform soft decoding on the image to obtain a luminance and chrominance (YUV) image; and
a conversion module configured to convert the YUV image into an RGB texture image.
11. The apparatus of claim 7, wherein the distortion module comprises:
a distortion submodule configured to process the pixel point according to the width and the height of the image and the distance from the pixel point to the center point of the image.
12. The apparatus according to claim 11, wherein the distortion submodule processes the pixel point according to the following formulas:
x1=x·a;
y1=y·a;
a=arctan(r1)/r1;
r1=r/s;
[formula image: definition of r in terms of x, y and R]
[formula image: definition of R in terms of w and h]
wherein the center point of the image is taken as the origin of a two-dimensional rectangular coordinate system, (x, y) are the coordinates of the pixel point before processing, (x1, y1) are the coordinates of the pixel point after processing, a is the distortion processing coefficient, r is the distance from the pixel point (x, y) to the center point of the image before processing, r1 is the distance from the pixel point (x1, y1) to the center point of the image after processing, R is the distance from a vertex of the image to the center point of the image, w is the width of the image, h is the height of the image, and s is a preset effect parameter.
13. An apparatus for image processing, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
for each pixel point in an image to be subjected to distortion processing, obtain the pixel point;
determine whether coordinates of the pixel point belong to a preset watermark region;
when the coordinates of the pixel point belong to the preset watermark region, keep the coordinates of the pixel point unchanged; and
when the coordinates of the pixel point do not belong to the preset watermark region, perform distortion processing on the pixel point;
wherein, before the obtaining of the pixel point of the image to be subjected to distortion processing, the processor is further configured to:
determine whether an obtained image needs to be subjected to distortion processing; and
when the obtained image needs to be subjected to distortion processing, determine the image as the image to be subjected to distortion processing;
wherein the determining whether the obtained image needs to be subjected to distortion processing comprises:
determining, according to device information of a photographing device, whether the photographing device comprises a wide-angle lens, and determining that the image needs to be subjected to distortion processing when the photographing device comprises the wide-angle lens, wherein the device information indicates whether a profile of the wide-angle lens is included.
14. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, perform the steps of the method of any one of claims 1 to 6.
CN201710193987.7A 2017-03-28 2017-03-28 Image processing method and device Active CN107025638B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710193987.7A CN107025638B (en) 2017-03-28 2017-03-28 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710193987.7A CN107025638B (en) 2017-03-28 2017-03-28 Image processing method and device

Publications (2)

Publication Number Publication Date
CN107025638A CN107025638A (en) 2017-08-08
CN107025638B true CN107025638B (en) 2020-02-07

Family

ID=59525471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710193987.7A Active CN107025638B (en) 2017-03-28 2017-03-28 Image processing method and device

Country Status (1)

Country Link
CN (1) CN107025638B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111192190B (en) * 2019-12-31 2023-05-12 北京金山云网络技术有限公司 Method and device for eliminating image watermark and electronic equipment


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1653479A (en) * 2002-03-11 2005-08-10 数字验证有限公司 Currency verification
CN1830002A (en) * 2003-07-28 2006-09-06 奥林巴斯株式会社 Image processing apparatus, image processing method, and distortion correcting method
CN102456212A (en) * 2010-10-19 2012-05-16 北大方正集团有限公司 Separation method and system for visible watermark in numerical image
CN103369192A (en) * 2012-03-31 2013-10-23 深圳市振华微电子有限公司 Method and device for Full-hardware splicing of multichannel video images
CN104899821A (en) * 2015-05-27 2015-09-09 合肥高维数据技术有限公司 Method for erasing visible watermark of document image
CN105141826A (en) * 2015-06-30 2015-12-09 广东欧珀移动通信有限公司 Distortion correction method and terminal
CN105096269A (en) * 2015-07-21 2015-11-25 北京交通大学 Radial image distortion rectifying method and system based on distorted linear structure detection
CN105227948A (en) * 2015-09-18 2016-01-06 广东欧珀移动通信有限公司 A kind of method and device searching distorted region in image
CN106407919A (en) * 2016-09-05 2017-02-15 珠海赛纳打印科技股份有限公司 Image processing-based text separation method, device and image forming device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"智能监控系统中跨平台播放器的设计与实现";彭爽;《中国优秀硕士学位论文全文数据库 信息科技辑》;20140815;摘要,第2-5章 *

Also Published As

Publication number Publication date
CN107025638A (en) 2017-08-08

Similar Documents

Publication Publication Date Title
CN109345485B (en) Image enhancement method and device, electronic equipment and storage medium
US20210097715A1 (en) Image generation method and device, electronic device and storage medium
US9674395B2 (en) Methods and apparatuses for generating photograph
CN108154465B (en) Image processing method and device
WO2016011747A1 (en) Skin color adjustment method and device
CN104918107B (en) The identification processing method and device of video file
CN109784164B (en) Foreground identification method and device, electronic equipment and storage medium
CN106131441B (en) Photographing method and device and electronic equipment
CN108154466B (en) Image processing method and device
CN105528765B (en) Method and device for processing image
CN109509195B (en) Foreground processing method and device, electronic equipment and storage medium
CN110728180B (en) Image processing method, device and storage medium
CN110619610B (en) Image processing method and device
CN109784327B (en) Boundary box determining method and device, electronic equipment and storage medium
CN112634160A (en) Photographing method and device, terminal and storage medium
CN105574834B (en) Image processing method and device
US11222235B2 (en) Method and apparatus for training image processing model, and storage medium
CN107730443B (en) Image processing method and device and user equipment
CN107292901B (en) Edge detection method and device
WO2019052449A1 (en) Skin color recognition method and apparatus, and storage medium
CN107025638B (en) Image processing method and device
CN107451972B (en) Image enhancement method, device and computer readable storage medium
CN107563957B (en) Eye image processing method and device
CN112016595A (en) Image classification method and device, electronic equipment and readable storage medium
CN114418865A (en) Image processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant