CN108154534B - Picture definition calibration method, terminal and computer readable storage medium


Info

Publication number: CN108154534B
Application number: CN201711433510.8A
Authority: CN (China)
Other versions: CN108154534A
Other languages: Chinese (zh)
Inventor: 马亮 (Ma Liang)
Original assignee: Nubia Technology Co Ltd
Current assignee: Nubia Technology Co Ltd
Legal status: Active (granted)
Application filed by Nubia Technology Co Ltd
Priority to CN201711433510.8A
Publication of CN108154534A (application), followed by publication of CN108154534B (grant)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 5/70

Abstract

The embodiment of the invention discloses a picture definition calibration method, which comprises: obtaining the resolutions of the four corners of a shot picture, where the shot picture is a test picture of a standard resolution card; when the resolutions of any two of the four corners of the shot picture are different, moving the center point of the shot picture according to the resolutions of the four corners; calculating the radial length value from the moved center point of the shot picture to each pixel point of the shot picture; and determining a noise-reduction strength value for each pixel point of the shot picture according to the radial length value, so that the shot picture is denoised according to the noise-reduction strength values. The embodiment of the invention also discloses a terminal and a computer-readable storage medium. The method can improve the definition of the picture edge and thereby achieve consistent edge definition across the picture.

Description

Picture definition calibration method, terminal and computer readable storage medium
Technical Field
The present invention relates to information processing technologies in the field of communications, and in particular, to a method, a terminal, and a computer-readable storage medium for calibrating picture sharpness.
Background
Because the optical centers of the lens and the sensor in the camera module are not coaxial, the edge definition of a shot picture is inconsistent, and the shot picture therefore needs to be denoised. A filter is usually adopted to denoise the shot picture; for example, a Gaussian filter may be selected. The Gaussian filter has a very important parameter that has a decisive influence on the denoising effect: if the value of this parameter is higher, the strength value set when the shot picture is denoised is larger and the denoising effect is better; if the value of this parameter is smaller, the strength value set when the shot picture is denoised is smaller and the denoising effect is relatively poor.
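By way of illustration only (the following sketch is not part of the original disclosure), the Python fragment below shows how the standard deviation of a Gaussian filter controls the denoising strength: a small sigma smooths weakly, a large sigma smooths strongly. OpenCV is assumed merely for convenience, and the file name and sigma values are placeholders.

```python
# Minimal sketch: the Gaussian filter's standard deviation (sigma) sets the
# denoising strength. "resolution_card.jpg" is a placeholder file name.
import cv2
import numpy as np

img = cv2.imread("resolution_card.jpg", cv2.IMREAD_GRAYSCALE)
if img is None:
    # Fall back to a synthetic noisy test image so the sketch stays runnable.
    rng = np.random.default_rng(0)
    img = np.clip(rng.normal(128, 30, (480, 640)), 0, 255).astype(np.uint8)

# ksize=(0, 0) lets OpenCV derive the kernel size from sigma.
weak = cv2.GaussianBlur(img, (0, 0), sigmaX=0.8)    # small sigma: weak smoothing
strong = cv2.GaussianBlur(img, (0, 0), sigmaX=3.0)  # large sigma: strong smoothing

# The stronger setting removes more of the high-frequency content (noise).
print("amount removed (std of difference from input):",
      float(np.std(img.astype(np.float32) - weak)),
      float(np.std(img.astype(np.float32) - strong)))
```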
In the prior art, in order to obtain this parameter, the coordinate information of each pixel point of the shot picture is usually substituted into a Gaussian filter function. The parameter obtained in this way, however, is a constant, so the noise-reduction strength value of every pixel point of the shot picture is the same. This approach can only guarantee that the definition near the center point of the shot picture is high, so the definition at the edge of the shot picture remains low.
Disclosure of Invention
In order to solve the above technical problems, embodiments of the present invention provide a method, a terminal, and a computer-readable storage medium for calibrating picture sharpness, which can improve picture edge sharpness, thereby achieving consistent picture edge sharpness.
The technical scheme of the invention is realized as follows:
in one aspect, an embodiment of the present invention provides a method for calibrating picture sharpness, where the method includes:
acquiring the resolutions of four corners of a shot picture, wherein the shot picture is a test picture of a standard resolution card;
when the resolutions of any two of the four corners of the shot picture are different, moving the central point of the shot picture according to the resolutions of the four corners of the shot picture;
calculating the radial length value from the center point of the moved shot picture to each pixel point of the shot picture;
and determining a noise reduction processing strength value of each pixel point of the shot picture according to the radial length value, so that the shot picture is subjected to noise reduction processing according to the noise reduction processing strength value.
In the above solution, the moving the center point of the captured picture according to the resolutions of the four corners of the captured picture includes:
calculating a transverse offset factor and a longitudinal offset factor according to the resolution of four corners of the shot picture;
acquiring the number of pixel points on the wide side of a shot picture and the number of pixel points on the long side of the shot picture;
and determining a correction coordinate value of the central point according to the transverse offset factor, the longitudinal offset factor, the number of pixel points on the wide side of the shot picture and the number of pixel points on the long side of the shot picture, and moving the central point of the shot picture according to the correction coordinate value of the central point.
In the above solution, the calculating a lateral shift factor and a longitudinal shift factor according to resolutions of four corners of the captured picture includes:
calculating the average value of the resolutions of the four corners according to the resolutions of the four corners of the shot picture;
and determining a transverse offset factor and a longitudinal offset factor according to the average value of the resolutions and the resolutions of the four corners of the shot picture.
In the foregoing solution, the determining the noise reduction processing strength value of each pixel point of the shot picture according to the radial length value includes:
determining a filter parameter selected by each pixel point of the shot picture according to a radial length value from the center point of the shot picture after the movement to each pixel point of the shot picture;
and determining the noise reduction processing intensity value of each pixel point of the shot picture according to the filter parameters.
In the above scheme, the determining, according to the radial length value from the center point of the moved shot picture to each pixel point of the shot picture, a filter parameter selected by each pixel point of the shot picture includes:
obtaining the maximum radial length value from the center point of the moved shot picture to the radial length value of each pixel point of the shot picture;
and determining a filter parameter selected by each pixel point of the shot picture according to the radial length value from the center point of the shot picture to each pixel point of the shot picture after the movement and the maximum radial length value.
In one aspect, an embodiment of the present invention provides a terminal, where the terminal includes: a processor, a memory, and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is used for executing the picture definition calibration program stored in the memory so as to realize the following steps:
acquiring the resolutions of four corners of a shot picture, wherein the shot picture is a test picture of a standard resolution card;
when the resolutions of any two of the four corners of the shot picture are different, moving the central point of the shot picture according to the resolutions of the four corners of the shot picture;
calculating the radial length value from the center point of the moved shot picture to each pixel point of the shot picture;
and determining a noise reduction processing strength value of each pixel point of the shot picture according to the radial length value, so that the shot picture is subjected to noise reduction processing according to the noise reduction processing strength value.
In the above solution, the processor is configured to execute an information processing program stored in the memory to implement the following steps:
calculating a transverse offset factor and a longitudinal offset factor according to the resolution of four corners of the shot picture;
acquiring the number of pixel points on the wide side of a shot picture and the number of pixel points on the long side of the shot picture;
and determining a correction coordinate value of the central point according to the transverse offset factor, the longitudinal offset factor, the number of pixel points on the wide side of the shot picture and the number of pixel points on the long side of the shot picture, and moving the central point of the shot picture according to the correction coordinate value of the central point.
In the above solution, the processor is further configured to execute the information processing program stored in the memory to implement the following steps:
calculating the average value of the resolutions of the four corners according to the resolutions of the four corners of the shot picture;
and determining a transverse offset factor and a longitudinal offset factor according to the average value of the resolutions and the resolutions of the four corners of the shot picture.
In the above solution, the processor is further configured to execute the information processing program stored in the memory to implement the following steps:
determining a filter parameter selected by each pixel point of the shot picture according to a radial length value from the center point of the shot picture after the movement to each pixel point of the shot picture;
and determining the noise reduction processing intensity value of each pixel point of the shot picture according to the filter parameters.
In one aspect, embodiments of the present invention also provide a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the method as described in any one of the above.
The embodiment of the invention provides a picture definition calibration method, a terminal and a computer readable storage medium, which are used for obtaining the resolutions of four corners of a shot picture, wherein the shot picture is a test picture of a standard resolution card; when the resolutions of any two of the four corners of the shot picture are different, moving the central point of the shot picture according to the resolutions of the four corners of the shot picture; calculating the radial length value from the center point of the moved shot picture to each pixel point of the shot picture; and determining a noise reduction processing strength value of each pixel point of the shot picture according to the radial length value, so that the shot picture is subjected to noise reduction processing according to the noise reduction processing strength value. According to the method, the terminal and the computer-readable storage medium for calibrating the picture definition, provided by the embodiment of the invention, the noise reduction processing strength value of each pixel point of the shot picture is determined according to the radial length value, namely different noise reduction processing strength values are determined according to different radial length values, so that the picture edge definition can be improved, and the picture edge definition consistency is further realized.
Drawings
Fig. 1 is a schematic hardware configuration diagram of an alternative mobile terminal implementing various embodiments of the present invention;
FIG. 2 is a diagram of a wireless communication system for the mobile terminal shown in FIG. 1;
fig. 3 is a first schematic flow chart of a method for calibrating picture sharpness according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of a picture sharpness calibration method according to an embodiment of the present invention;
FIG. 5 is a diagram of an ISO12233 resolution card provided by an embodiment of the present invention;
FIG. 6 is a diagram illustrating a measurement of resolution of four corners of a captured picture according to an embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating moving the center point of a captured picture according to the resolutions of the four corners of the captured picture according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like.
The following description will be given by way of example of a mobile terminal, and it will be understood by those skilled in the art that the construction according to the embodiment of the present invention can be applied to a fixed type terminal, in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex-Long Term Evolution), and TDD-LTE (Time Division duplex-Long Term Evolution).
WiFi belongs to short-distance wireless transmission technology, and the mobile terminal can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 102, and provides wireless broadband internet access for the user. Although fig. 1 shows the WiFi module 102, it is understood that it does not belong to the essential constitution of the mobile terminal, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the case of a phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 and then output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited to these specific examples.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processors; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication Network system according to an embodiment of the present invention, where the communication Network system is an LTE system of a universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an IP service 204 of an operator, which are in communication connection in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving GateWay) 2034, a PGW (PDN GateWay) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME2031 is a control node that handles signaling between the UE201 and the EPC203 and provides bearer and connection management. The HSS2032 provides registers for managing functions such as the home location register (not shown) and holds subscriber-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW2034; the PGW2035 may provide IP address assignment for the UE201 and other functions; and the PCRF2036 is a policy and charging control policy decision point for service data flow and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, the present invention provides various embodiments of the method.
The embodiment of the invention provides a method for calibrating picture definition, which is applied to a terminal. The functions implemented by the method can be implemented by a processor in the terminal calling program code; the program code can be saved in a computer storage medium, and the terminal at least comprises the processor and the storage medium. Fig. 3 is a schematic flow chart of an implementation of a method for calibrating picture sharpness according to an embodiment of the present invention; as shown in fig. 3, the method may include the following steps:
step 301, obtaining the resolution of four corners of a shot picture, wherein the shot picture is a test picture of a standard resolution card.
Before obtaining the resolutions of the four corners of the shot picture, the method further comprises: and collecting shot pictures.
The shot picture is a test picture of a standard resolution card and is composed of a plurality of pixel points.
Wherein the standard resolution card comprises: ISO12233 resolution cards, SFR (Spatial Frequency Response) resolution cards, and MTF (Modulation Transfer Function) resolution cards, which are not limited herein.
Specifically, the execution main body of the method provided by the embodiment of the invention is a terminal, and the terminal can be an electronic device such as a mobile phone, a tablet computer, a notebook computer and the like. The terminal is provided with an image acquisition unit and can be specifically realized through a camera. When a user holds the terminal, an image acquisition unit in the terminal acquires images of the standard resolution card in front of the user to obtain a test image of the standard resolution card.
Wherein, the obtaining the resolutions of the four corners of the shot picture specifically may include: the user measures the resolutions of the four corners of the shot picture by using a standard resolution card, and the processor in the terminal processes the measured resolutions of the four corners of the shot picture and acquires the resolutions of the four corners of the shot picture.
Specifically, the standard resolution card can be pasted on a plane panel, the terminal is placed at a position away from the plane panel by a certain distance, then the standard resolution card on the plane panel is shot by a camera in the terminal, so that a test picture of the standard resolution card is obtained, and finally, the resolution of four corners of the shot picture is calculated by using HYRes software.
Here, the resolution of the four corners of the shot picture is calculated by using the HYRes software, which belongs to the prior art, and the embodiment of the present invention is not described herein again.
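As a hedged illustration (not taken from the patent), the following sketch extracts the four corner regions of the shot test picture so that each patch can be handed to resolution-measurement software such as HYRes. The 20% patch size is an assumption made only for this example.

```python
# Sketch: crop four corner patches from the shot test picture for resolution
# measurement. The patch fraction (20%) is an illustrative assumption.
import numpy as np

def corner_patches(image: np.ndarray, frac: float = 0.2):
    """Return the top-left, top-right, bottom-left and bottom-right patches."""
    h, w = image.shape[:2]
    ph, pw = max(1, int(h * frac)), max(1, int(w * frac))
    return {
        "top_left": image[:ph, :pw],
        "top_right": image[:ph, w - pw:],
        "bottom_left": image[h - ph:, :pw],
        "bottom_right": image[h - ph:, w - pw:],
    }

if __name__ == "__main__":
    test_picture = np.zeros((3000, 4000), dtype=np.uint8)  # placeholder image
    for name, patch in corner_patches(test_picture).items():
        print(name, patch.shape)
```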
And step 302, when the resolutions of any two of the four corners of the shot picture are different, moving the central point of the shot picture according to the resolutions of the four corners of the shot picture.
Here, after the resolutions of the four corners of the photographed picture are obtained, if the resolutions of any two of the four corners of the photographed picture are different, the center point of the photographed picture is moved according to the resolutions of the four corners of the photographed picture.
Wherein, moving the center point of the photographed picture according to the resolutions of the four corners of the photographed picture may specifically include:
calculating a transverse offset factor and a longitudinal offset factor according to the resolution of four corners of the shot picture;
acquiring the number of pixel points on the wide side of a shot picture and the number of pixel points on the long side of the shot picture;
and determining a correction coordinate value of the central point according to the transverse offset factor, the longitudinal offset factor, the number of pixel points on the wide side of the shot picture and the number of pixel points on the long side of the shot picture, and moving the central point of the shot picture according to the correction coordinate value of the central point.
Wherein, the calculating the horizontal offset factor and the vertical offset factor according to the resolution of the four corners of the shot picture comprises:
calculating the average value of the resolutions of the four corners according to the resolutions of the four corners of the shot picture;
and determining a transverse offset factor and a longitudinal offset factor according to the average value of the resolutions and the resolutions of the four corners of the shot picture.
Step 303, calculating a radial length value from the center point of the moved shot picture to each pixel point of the shot picture.
Specifically, after the center point of the shot picture is moved according to the resolutions of the four corners of the shot picture, the radial length value from the moved center point of the shot picture to each pixel point of the shot picture can be calculated by using the distance formula between two points. Since the shot picture contains a certain number of pixel points, a corresponding number of radial length values can finally be obtained, and these radial length values are recorded.
The calculating, by using the distance formula between two points, of the radial length value from the moved center point of the shot picture to each pixel point of the shot picture may specifically include: acquiring the corrected coordinate value of the center point, acquiring the coordinate value of each pixel point of the shot picture, and calculating the distance between the corrected coordinate value of the center point and the coordinate value of each pixel point of the shot picture by using the distance formula between two points; the calculated result is the radial length value from the moved center point of the shot picture to each pixel point of the shot picture. Here, obtaining the corrected coordinate value of the center point and the coordinate value of each pixel point of the shot picture, and calculating the distance between them with the distance formula between two points, are implemented by a processor in the terminal calling program code. A sketch of this calculation is given below.
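A minimal sketch of this calculation, assuming the corrected center coordinate is already available from step 302; the image size and center values in the usage example are hypothetical.

```python
# Sketch of step 303: radial length from the corrected center point to every
# pixel of the shot picture, using the two-point distance formula.
import numpy as np

def radial_lengths(height: int, width: int, center_xy: tuple) -> np.ndarray:
    """Distance of each pixel (x_i, y_i) from the corrected center (x, y)."""
    cx, cy = center_xy
    ys, xs = np.mgrid[0:height, 0:width]          # pixel row/column coordinates
    return np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)

if __name__ == "__main__":
    r = radial_lengths(3000, 4000, (2050.0, 1480.0))  # hypothetical corrected center
    print("number of radial length values:", r.size)
    print("maximum radial length:", float(r.max()))
```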
And 304, determining a noise reduction processing strength value of each pixel point of the shot picture according to the radial length value, so that the shot picture is subjected to noise reduction processing according to the noise reduction processing strength value.
Specifically, after obtaining the radial length value from the center point of the moved shot picture to each pixel point of the shot picture, obtaining the maximum radial length value from the center point of the moved shot picture to the radial length value of each pixel point of the shot picture; and determining a filter parameter selected by each pixel point of the shot picture according to the radial length value from the center point of the moved shot picture to each pixel point of the shot picture and the maximum radial length value, and determining a noise reduction processing strength value of each pixel point of the shot picture according to the filter parameter, so that the shot picture is subjected to noise reduction processing according to the noise reduction processing strength value.
The filter may specifically include: an ideal low pass filter, a butterworth low pass filter, and a gaussian filter, and embodiments of the present invention are not limited in any way herein.
Here, after determining the noise reduction processing intensity value of each pixel point of the captured picture, specifically, MATLAB software or OpenCV software may be used to perform noise reduction processing on each pixel point of the captured picture, where a method for performing noise reduction on each pixel point of the captured picture by using MATLAB software or OpenCV software belongs to the prior art, and specifically, spatial domain filtering, transform domain filtering, partial differential equation, variational method, and morphological noise filter in the prior art may be used to perform noise reduction on four corners of the image, which is not described herein in detail in the embodiments of the present invention.
According to the method for calibrating the image definition provided by the embodiment of the invention, the noise reduction processing strength value of each pixel point of the shot image is determined according to the radial length value, namely different noise reduction processing strength values are determined according to different radial length values, so that the image edge definition can be improved, and the image edge definition consistency is further realized.
The embodiment of the invention also provides a picture definition calibration method, which is applied to a terminal, the functions realized by the method can be realized by calling the program codes through a processor in the terminal, and the program codes can be saved in a computer storage medium. As shown in fig. 4, the method for calibrating the definition of a picture provided by the embodiment of the present invention includes the following steps:
step 401, collecting a shot picture.
The shot picture is a test picture of a standard resolution card and is composed of a plurality of pixel points.
Wherein the standard resolution card comprises: an ISO12233 resolution card, an SFR (Spatial Frequency Response) resolution card, and an MTF (Modulation Transfer Function) resolution card. Since the ISO12233 resolution card has features common to the SFR resolution card and the MTF resolution card, the ISO12233 resolution card is used as the example in the embodiment of the present invention; fig. 5 is a schematic diagram of the ISO12233 resolution card used in the embodiment of the present invention.
Specifically, the execution main body of the method provided by the embodiment of the invention is a terminal, and the terminal can be an electronic device such as a mobile phone, a tablet computer, a notebook computer and the like. The terminal is provided with an image acquisition unit and can be specifically realized through a camera. When a user holds the terminal, an image acquisition unit in the terminal acquires images of the ISO12233 resolution card in front of the user to obtain a test image of the ISO12233 resolution card.
And step 402, acquiring the resolution of four corners of the shot picture.
Wherein, the obtaining the resolutions of the four corners of the shot picture specifically may include: the user measures the resolutions of the four corners of the shot picture by using a standard resolution card, and the processor in the terminal processes the measured resolutions of the four corners of the shot picture and acquires the resolutions of the four corners of the shot picture. Here, although the resolution of the center point of the captured picture is generally not used when the noise reduction processing is performed on the captured picture, the resolution of the center point of the captured picture is generally measured at the same time as the resolutions of the four corners of the captured picture.
Here, before the measuring the resolutions of the four corners of the taken picture using the standard resolution card, the method further includes: the user selects a flat panel and checks whether the size of the flat panel satisfies a first preset condition.
Wherein the first preset condition is: the size of the flat panel is greater than or equal to a first preset parameter threshold, and the first preset parameter threshold is the size of the standard resolution card. The standard resolution card is a standard sample card, and the size of a standard sample card produced by a manufacturer is generally fixed, so that the standard resolution card can be used directly by the user at a later stage.
For example, if the size of the ISO12233 resolution card is X, the selected flat panel is at least greater than or equal to X, so as to ensure that the ISO12233 resolution card can be completely pasted on the flat panel.
Illustratively, as shown in fig. 6, the ISO12233 resolution card is pasted on a flat panel, the terminal is placed at a certain distance from the flat panel, then a camera in the terminal is used for shooting the standard resolution card on the flat panel, so as to obtain a test picture of the ISO12233 resolution card, and finally the resolution of the four corners of the shot picture and the resolution of the center point of the shot picture are calculated by using the HYRes software.
Here, the resolutions of the center point and of the four corners of the shot picture are calculated by using the HYRes software, which belongs to the prior art, and the embodiment of the present invention is not described herein again.
And step 403, when the resolutions of any two of the four corners of the shot picture are different, moving the central point of the shot picture according to the resolutions of the four corners of the shot picture.
As shown in fig. 7, moving the center point of the captured picture according to the resolutions of the four corners of the captured picture may specifically include:
(1) denote the resolutions of the four corners of the shot picture obtained in step 402 as a, b, c and d, respectively;
(2) determine the average value E of the resolutions of the four corners according to the following formula:
E = (a + b + c + d) / 4
(3) determine a lateral offset factor f_w and a longitudinal offset factor f_h from the average value E and the resolutions a, b, c, d of the four corners (the corresponding formulas appear only as equation images in the original filing and are not reproduced here);
(4) acquire the number of pixel points w on the wide side of the shot picture and the number of pixel points h on the long side of the shot picture; the coordinate values (x_o, y_o) of the center point of the shot picture are then:
x_o = [w/2], y_o = [h/2]
wherein [ ] is a rounding symbol;
(5) determine the corrected coordinate values (x, y) of the center point of the shot picture according to the lateral offset factor f_w, the longitudinal offset factor f_h, the number of pixel points w on the wide side and the number of pixel points h on the long side of the shot picture (the corresponding formulas likewise appear only as equation images in the original filing); and move the center point of the shot picture to the corrected coordinate values. Wherein α is an adjustment factor whose general value range is (0, 1); it can be set anywhere in this range according to user preference, and its default value is 0.5. A sketch of this whole step, under stated assumptions, is given after this list.
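The combined sketch below follows steps (1) to (5) under stated assumptions: the average value E is computed as described above, while the expressions used for the offset factors f_w, f_h and for the corrected center (x, y) are this sketch's own assumptions (signed deviations of the corner resolutions from E, scaled by the adjustment factor α), because the patented formulas are only available as equation images. The corner labeling and the example numbers are likewise hypothetical.

```python
# Sketch of step 403 under stated assumptions; the offset-factor and corrected-
# center formulas below are NOT the patented ones, only plausible stand-ins.
def corrected_center(a, b, c, d, w, h, alpha=0.5):
    """a, b, c, d: assumed top-left, top-right, bottom-left, bottom-right corner
    resolutions; w, h: pixel counts of the wide and long sides."""
    e = (a + b + c + d) / 4.0                 # average corner resolution E
    f_w = ((b + d) - (a + c)) / (4.0 * e)     # assumed lateral offset factor
    f_h = ((c + d) - (a + b)) / (4.0 * e)     # assumed longitudinal offset factor
    x0, y0 = w // 2, h // 2                   # original center point [w/2], [h/2]
    x = x0 + alpha * f_w * x0                 # assumed corrected x
    y = y0 + alpha * f_h * y0                 # assumed corrected y
    return x, y

if __name__ == "__main__":
    # Hypothetical corner resolutions and picture size.
    print(corrected_center(a=1800, b=1650, c=1700, d=1600, w=4000, h=3000))
```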
Step 404, calculating a radial length value from the center point of the moved shot picture to each pixel point of the shot picture.
Specifically, as shown in fig. 7, the coordinate value of the center point of the moved shot picture is the corrected coordinate value (x, y) of the center point obtained in step 403. The coordinate value (x_i, y_i) of each pixel point of the shot picture is obtained at the same time, and the distance between the moved center point and each pixel point is calculated with the two-point distance formula
r_i = sqrt((x_i - x)^2 + (y_i - y)^2),
which is the radial length value from the moved center point of the shot picture to that pixel point; according to the number of pixel points of the shot picture, a corresponding number of radial length values is finally obtained. Here, obtaining the coordinate value of each pixel point of the shot picture and calculating the distance from the moved center point to each pixel point with the two-point distance formula are implemented by the processor in the terminal calling program code.
And 405, determining a noise reduction processing strength value of each pixel point of the shot picture according to the radial length value, so that the shot picture is subjected to noise reduction processing according to the noise reduction processing strength value.
The determining the denoising strength value of each pixel point of the shot picture according to the radial length value may specifically include: obtaining the maximum radial length value from the center point of the moved shot picture to the radial length value of each pixel point of the shot picture; and determining a filter parameter selected by each pixel point of the shot picture according to the radial length value from the center point of the moved shot picture to each pixel point of the shot picture and the maximum radial length value, and determining a noise reduction processing strength value of each pixel point of the shot picture according to the filter parameter, so that the shot picture is subjected to noise reduction processing according to the noise reduction processing strength value.
Specifically, in the embodiment of the present invention, a Gaussian filter is selected to perform noise reduction on each pixel point of the shot picture. The Gaussian filter has a very important parameter: the standard deviation. The larger the standard deviation, the wider the frequency band of the Gaussian filter, the larger the noise-reduction strength value applied when each pixel point of the shot picture is denoised, and the clearer the shot picture. In the prior art, however, a constant standard deviation is usually adopted, so the noise-reduction strength value is the same for every pixel point of the shot picture, and the noise-reduction effect at the edge pixel points does not match the edge definition of the shot picture. In the embodiment of the present invention, the standard deviation is instead set in direct proportion to the radial length value i of the current pixel point of the shot picture relative to the longest radial length L of the shot picture (the exact expression appears only as an equation image in the original filing). In this way a corresponding standard deviation can be selected for each pixel point of the shot picture; substituting this standard deviation into the Gaussian function yields the noise-reduction strength value of each pixel point, and the shot picture is denoised according to these strength values. A hedged sketch of this step is given below.
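The sketch below illustrates this step under stated assumptions: the standard deviation is taken as sigma_i = sigma_max * (r_i / L), an assumed "directly proportional" form, and the per-pixel Gaussian filtering is approximated by blending the input with a strongly blurred copy according to the radial weight, which is a practical stand-in rather than the patented procedure. The value sigma_max = 3.0 and the example center are hypothetical.

```python
# Hedged sketch of step 405: radially varying Gaussian noise reduction.
# sigma_i = sigma_max * (r_i / L) is an assumed proportional form; the blend of
# the input and a strongly blurred copy approximates a per-pixel Gaussian filter.
import cv2
import numpy as np

def radial_denoise(img: np.ndarray, center_xy: tuple, sigma_max: float = 3.0) -> np.ndarray:
    h, w = img.shape[:2]
    cx, cy = center_xy
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)   # radial length of each pixel
    weight = r / r.max()                           # 0 at the center, 1 at the farthest pixel
    strong = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma_max).astype(np.float32)
    out = (1.0 - weight) * img.astype(np.float32) + weight * strong
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy = np.clip(rng.normal(128, 25, (480, 640)), 0, 255).astype(np.uint8)
    cleaned = radial_denoise(noisy, center_xy=(330.0, 250.0))  # hypothetical corrected center
    print(cleaned.shape, cleaned.dtype)
```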
According to the method for calibrating the image definition provided by the embodiment of the invention, the noise reduction processing strength value of each pixel point of the shot image is determined according to the radial length value, namely different noise reduction processing strength values are determined according to different radial length values, so that the image edge definition can be improved, and the image edge definition consistency is further realized.
An embodiment of the present invention further provides a terminal 50, fig. 8 is a schematic diagram of a structure of a terminal according to an embodiment of the present invention, and as shown in fig. 8, the terminal 50 includes: a processor 501, a memory 502, and a communication bus 503;
the communication bus 503 is used for realizing connection communication between the processor 501 and the memory 502;
the processor 501 is configured to execute a picture sharpness calibration program stored in the memory 502 to implement the following steps:
acquiring the resolutions of four corners of a shot picture, wherein the shot picture is a test picture of a standard resolution card;
when the resolutions of any two of the four corners of the shot picture are different, moving the central point of the shot picture according to the resolutions of the four corners of the shot picture;
calculating the radial length value from the center point of the moved shot picture to each pixel point of the shot picture;
and determining a noise reduction processing strength value of each pixel point of the shot picture according to the radial length value, so that the shot picture is subjected to noise reduction processing according to the noise reduction processing strength value.
Further, the processor 501 is configured to execute the information processing program stored in the memory 502 to implement the following steps:
calculating a transverse offset factor and a longitudinal offset factor according to the resolution of four corners of the shot picture;
acquiring the number of pixel points on the wide side of a shot picture and the number of pixel points on the long side of the shot picture;
and determining a correction coordinate value of the central point according to the transverse offset factor, the longitudinal offset factor, the number of pixel points on the wide side of the shot picture and the number of pixel points on the long side of the shot picture, and moving the central point of the shot picture according to the correction coordinate value of the central point.
Further, the processor 501 is further configured to execute the information processing program stored in the memory 502 to implement the following steps:
calculating the average value of the resolutions of the four corners according to the resolutions of the four corners of the shot picture;
and determining a transverse offset factor and a longitudinal offset factor according to the average value of the resolutions and the resolutions of the four corners of the shot picture.
Further, the processor 501 is further configured to execute the information processing program stored in the memory 502 to implement the following steps:
determining a filter parameter selected by each pixel point of the shot picture according to a radial length value from the center point of the shot picture after the movement to each pixel point of the shot picture;
and determining the noise reduction processing intensity value of each pixel point of the shot picture according to the filter parameters.
Further, the processor 501 is further configured to execute the information processing program stored in the memory 502 to implement the following steps:
obtaining the maximum radial length value from the center point of the moved shot picture to the radial length value of each pixel point of the shot picture;
and determining a filter parameter selected by each pixel point of the shot picture according to the radial length value from the center point of the shot picture to each pixel point of the shot picture after the movement and the maximum radial length value.
The memory 502 in embodiments of the present invention is used to store various types of data to support the operation of the terminal 50. Examples of such data include: any computer program for operating on the terminal 50, such as an operating system and application programs; contact data; telephone book data; a message; a picture; video, etc. The operating system includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application programs may include various application programs such as a Media Player (Media Player), a Browser (Browser), etc. for implementing various application services. The program for implementing the method of the embodiment of the present invention may be included in the application program.
The method disclosed by the above-mentioned embodiments of the present invention may be applied to the processor 501, or implemented by the processor 501. The processor 501 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 501. The Processor 501 may be a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, etc. Processor 501 may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed by the embodiment of the invention can be directly implemented by a hardware decoding processor, or can be implemented by combining hardware and software modules in the decoding processor. The software modules may be located in a storage medium located in the memory 502, and the processor 501 reads the information in the memory 502 and performs the steps of the aforementioned methods in conjunction with its hardware.
Specifically, the above description of the terminal embodiment is similar to the description of the method embodiment, and for technical details that are not disclosed in the terminal embodiment of the present invention, reference may be made to the description of the information processing method embodiment, and no further description is given to the embodiment of the present invention.
According to the terminal provided by the embodiment of the invention, the noise reduction processing strength value of each pixel point of the shot picture is determined according to the radial length value, namely different noise reduction processing strength values are determined according to different radial length values, so that the edge definition of the picture can be improved, and the consistency of the edge definition of the picture is further realized.
In the embodiment of the present invention, if the information processing method is implemented in the form of a software functional module and sold or used as an independent product, the information processing method may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the invention are not limited to any specific combination of hardware and software.
An embodiment of the present invention provides a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of:
acquiring the resolutions of four corners of a shot picture, wherein the shot picture is a test picture of a standard resolution card;
when the resolutions of any two of the four corners of the shot picture are different, moving the central point of the shot picture according to the resolutions of the four corners of the shot picture;
calculating the radial length value from the center point of the moved shot picture to each pixel point of the shot picture;
and determining a noise reduction processing strength value of each pixel point of the shot picture according to the radial length value, so that the shot picture is subjected to noise reduction processing according to the noise reduction processing strength value.
The above description of the computer-readable storage medium embodiment is similar to the description of the method embodiments and has the same beneficial effects. For technical details not disclosed in the computer-readable storage medium embodiment of the present invention, reference may be made to the description of the method embodiments of the present invention.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

It should also be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention. The above serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of units is only a logical functional division, and there may be other divisions in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processor, or each unit may serve separately as one unit, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be completed by hardware related to program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as an independent product. Based on such understanding, the technical solutions of the embodiments of the present invention, in essence, or the part thereof contributing to the prior art, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (7)

1. A method for calibrating picture sharpness, the method comprising:
acquiring the resolutions of four corners of a shot picture, wherein the shot picture is a test picture of a standard resolution card;
when the resolutions of any two of the four corners of the shot picture are different, moving the central point of the shot picture according to the resolutions of the four corners of the shot picture;
calculating the radial length value from the center point of the moved shot picture to each pixel point of the shot picture;
determining a noise reduction processing intensity value of each pixel point of the shot picture according to the radial length value, so that the shot picture is subjected to noise reduction processing according to the noise reduction processing intensity value;
wherein the determining the noise reduction processing intensity value of each pixel point of the shot picture according to the radial length value comprises:
determining a filter parameter selected for each pixel point of the shot picture according to the radial length value from the moved center point of the shot picture to each pixel point of the shot picture;
determining the noise reduction processing intensity value of each pixel point of the shot picture according to the filter parameter;
wherein the determining the filter parameter selected for each pixel point of the shot picture according to the radial length value from the moved center point of the shot picture to each pixel point of the shot picture comprises: obtaining a maximum radial length value among the radial length values from the moved center point of the shot picture to each pixel point of the shot picture; and determining the filter parameter selected for each pixel point of the shot picture according to the radial length value from the moved center point of the shot picture to each pixel point of the shot picture and the maximum radial length value.
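As an informal illustration of this last limitation (a sketch under the assumption that the filter is Gaussian and that its parameter grows linearly with the ratio of the radial length to the maximum radial length; the claim fixes neither the constants nor the linear form, and this sketch is not part of the claims), the parameter selection could look like this:

def filter_parameter(r, r_max, sigma_min=0.5, sigma_max=2.5):
    # Larger radial length -> larger Gaussian sigma -> stronger noise
    # reduction toward the soft edges of the shot picture.
    return sigma_min + (sigma_max - sigma_min) * (r / r_max)

# Hypothetical example: a pixel halfway out gets an intermediate parameter.
print(filter_parameter(r=600.0, r_max=1200.0))   # 1.5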
2. The method of claim 1, wherein the moving the center point of the shot picture according to the resolutions of the four corners of the shot picture comprises:
calculating a transverse offset factor and a longitudinal offset factor according to the resolutions of the four corners of the shot picture;
acquiring the number of pixel points on the wide side of the shot picture and the number of pixel points on the long side of the shot picture;
and determining a correction coordinate value of the central point according to the transverse offset factor, the longitudinal offset factor, the number of pixel points on the wide side of the shot picture and the number of pixel points on the long side of the shot picture, and moving the central point of the shot picture according to the correction coordinate value of the central point.
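For illustration only, a corrected center coordinate could be formed by scaling the two offset factors with the pixel counts of the two sides; the exact combination and the sign convention below are assumptions rather than text taken from the claim, and the sketch is not part of the claims.

def corrected_center(dx_factor, dy_factor, width_px, height_px):
    # Start from the geometric center and shift it by a fraction of each
    # half-side, where the fraction is the corresponding offset factor.
    cx = width_px / 2.0 + dx_factor * width_px / 2.0
    cy = height_px / 2.0 + dy_factor * height_px / 2.0
    return cx, cy

# Hypothetical example: a positive x factor and a negative y factor move
# the center right and up under this sign convention.
print(corrected_center(0.025, -0.013, 4000, 3000))   # (2050.0, 1480.5)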
3. The method of claim 2, wherein the calculating the transverse offset factor and the longitudinal offset factor according to the resolutions of the four corners of the shot picture comprises:
calculating the average value of the resolutions of the four corners according to the resolutions of the four corners of the shot picture;
and determining a transverse offset factor and a longitudinal offset factor according to the average value of the resolutions and the resolutions of the four corners of the shot picture.
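One way such factors could be derived (purely an assumed formula for illustration; the claim fixes only the inputs, namely the average value and the four corner resolutions, and this sketch is not part of the claims) is to compare the corners on one side against the opposite side, normalized by the average:

def offset_factors(top_left, top_right, bottom_left, bottom_right):
    mean_res = (top_left + top_right + bottom_left + bottom_right) / 4.0
    # Sign convention assumed here: if the right-hand corners resolve better
    # than the left-hand ones, the transverse factor is positive, i.e. the
    # center drifts toward the sharper side.
    dx = ((top_right + bottom_right) - (top_left + bottom_left)) / (2.0 * mean_res)
    dy = ((bottom_left + bottom_right) - (top_left + top_right)) / (2.0 * mean_res)
    return dx, dy

# Hypothetical example: right corners slightly sharper, top and bottom balanced.
print(offset_factors(1000.0, 1100.0, 980.0, 1120.0))   # (~0.114, 0.0)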
4. A terminal, characterized in that the terminal comprises: a processor, a memory, and a communication bus;
the communication bus is configured to realize connection and communication between the processor and the memory;
the processor is used for executing the picture definition calibration program stored in the memory so as to realize the following steps:
acquiring the resolutions of four corners of a shot picture, wherein the shot picture is a test picture of a standard resolution card;
when the resolutions of any two of the four corners of the shot picture are different, moving the central point of the shot picture according to the resolutions of the four corners of the shot picture;
calculating the radial length value from the center point of the moved shot picture to each pixel point of the shot picture;
determining a noise reduction processing intensity value of each pixel point of the shot picture according to the radial length value, so that the shot picture is subjected to noise reduction processing according to the noise reduction processing intensity value;
wherein the determining the noise reduction processing intensity value of each pixel point of the shot picture according to the radial length value comprises:
determining a filter parameter selected for each pixel point of the shot picture according to the radial length value from the moved center point of the shot picture to each pixel point of the shot picture;
determining the noise reduction processing intensity value of each pixel point of the shot picture according to the filter parameter;
wherein the determining the filter parameter selected for each pixel point of the shot picture according to the radial length value from the moved center point of the shot picture to each pixel point of the shot picture comprises: obtaining a maximum radial length value among the radial length values from the moved center point of the shot picture to each pixel point of the shot picture; and determining the filter parameter selected for each pixel point of the shot picture according to the radial length value from the moved center point of the shot picture to each pixel point of the shot picture and the maximum radial length value.
5. The terminal of claim 4, wherein the processor is configured to execute the picture definition calibration program stored in the memory to further implement the following steps:
calculating a transverse offset factor and a longitudinal offset factor according to the resolutions of the four corners of the shot picture;
acquiring the number of pixel points on the wide side of the shot picture and the number of pixel points on the long side of the shot picture;
and determining a correction coordinate value of the central point according to the transverse offset factor, the longitudinal offset factor, the number of pixel points on the wide side of the shot picture and the number of pixel points on the long side of the shot picture, and moving the central point of the shot picture according to the correction coordinate value of the central point.
6. The terminal of claim 5, wherein the processor is further configured to execute the picture definition calibration program stored in the memory to implement the following steps:
calculating the average value of the resolutions of the four corners according to the resolutions of the four corners of the shot picture;
and determining a transverse offset factor and a longitudinal offset factor according to the average value of the resolutions and the resolutions of the four corners of the shot picture.
7. A computer readable storage medium, characterized in that the computer readable storage medium stores one or more programs which are executable by one or more processors to implement the steps of the method according to any one of claims 1 to 3.
CN201711433510.8A 2017-12-26 2017-12-26 Picture definition calibration method, terminal and computer readable storage medium Active CN108154534B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711433510.8A CN108154534B (en) 2017-12-26 2017-12-26 Picture definition calibration method, terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711433510.8A CN108154534B (en) 2017-12-26 2017-12-26 Picture definition calibration method, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108154534A CN108154534A (en) 2018-06-12
CN108154534B true CN108154534B (en) 2020-09-01

Family

ID=62462889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711433510.8A Active CN108154534B (en) 2017-12-26 2017-12-26 Picture definition calibration method, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108154534B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116758123A (en) * 2023-04-25 2023-09-15 威海凯思信息科技有限公司 Ocean wave image processing method and device and server


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007148500A (en) * 2005-11-24 2007-06-14 Olympus Corp Image processor and image processing method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1791232A (en) * 2005-11-16 2006-06-21 天津大学 High-precision HDTV image definition test pattern and its forming method
WO2008018771A1 (en) * 2006-08-11 2008-02-14 Mtekvision Co., Ltd. Image noise reduction apparatus and method, recorded medium recorded the program performing it
CN101183175A (en) * 2006-11-13 2008-05-21 华晶科技股份有限公司 Optical aberration correcting system and method of digital cameras
CN102265595A (en) * 2008-12-24 2011-11-30 株式会社理光 Method and apparatus for image processing and on-vehicle camera apparatus
CN101840576A (en) * 2010-05-12 2010-09-22 浙江大学 Method for visually testing resolution of each imaging area of digital camera
CN103686104A (en) * 2012-09-26 2014-03-26 株式会社日立制作所 Image processing apparatus
CN106023193A (en) * 2016-05-18 2016-10-12 东南大学 Array camera observation method for detecting structure surface in turbid media
CN107230192A (en) * 2017-05-31 2017-10-03 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and mobile terminal

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Penalized Weighted Least-Squares Approach to Sinogram Noise Reduction and Image Reconstruction for Low-Dose X-Ray Computed Tomography; Jing Wang et al.; IEEE Transactions on Medical Imaging; 2006-10-31; Vol. 25, No. 10; 1272-1283 *
Principles of digital image sharpness processing and methods of correction and adjustment (数字化图像清晰度处理原理和校正调节方法); Song Yuehong; Printing Technology (印刷技术); 2003-08-31; 25, 27, 30-31 *
Digital television image sharpness (数字电视图像清晰度); Li Guiling et al.; Video Engineering (电视技术); 2005-12-17; No. 12; 80-83 *

Also Published As

Publication number Publication date
CN108154534A (en) 2018-06-12

Similar Documents

Publication Publication Date Title
CN107093418B (en) Screen display method, computer equipment and storage medium
CN108024065B (en) Terminal shooting method, terminal and computer readable storage medium
CN109144441B (en) Screen adjusting method, terminal and computer readable storage medium
CN108038834B (en) Method, terminal and computer readable storage medium for reducing noise
CN108459799B (en) Picture processing method, mobile terminal and computer readable storage medium
CN108198150B (en) Method for eliminating image dead pixel, terminal and storage medium
CN107705247B (en) Image saturation adjusting method, terminal and storage medium
CN110189368B (en) Image registration method, mobile terminal and computer readable storage medium
CN107230065B (en) Two-dimensional code display method and device and computer readable storage medium
CN110069122B (en) Screen control method, terminal and computer readable storage medium
CN112188058A (en) Video shooting method, mobile terminal and computer storage medium
CN109710159B (en) Flexible screen response method and device and computer readable storage medium
CN108848321B (en) Exposure optimization method, device and computer-readable storage medium
CN107743199B (en) Image processing method, mobile terminal and computer readable storage medium
CN113393398A (en) Image noise reduction processing method and device and computer readable storage medium
CN113301251A (en) Auxiliary shooting method, mobile terminal and computer-readable storage medium
CN112135045A (en) Video processing method, mobile terminal and computer storage medium
CN112153305A (en) Camera starting method, mobile terminal and computer storage medium
CN108154534B (en) Picture definition calibration method, terminal and computer readable storage medium
CN107743204B (en) Exposure processing method, terminal, and computer-readable storage medium
CN108183833B (en) Response processing method and device and computer readable storage medium
CN107844353B (en) Display method, terminal and computer readable storage medium
CN113222850A (en) Image processing method, device and computer readable storage medium
CN108259765B (en) Shooting method, terminal and computer readable storage medium
CN107742279B (en) Image processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant