CN114630049A - Image compensation method, mobile terminal and image compensation device - Google Patents

Info

Publication number
CN114630049A
CN114630049A (application CN202210267501.0A)
Authority
CN
China
Prior art keywords
frame display
current frame
image
display data
previous frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210267501.0A
Other languages
Chinese (zh)
Inventor
袁灿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan China Star Optoelectronics Technology Co Ltd
Original Assignee
Wuhan China Star Optoelectronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan China Star Optoelectronics Technology Co Ltd filed Critical Wuhan China Star Optoelectronics Technology Co Ltd
Priority to CN202210267501.0A
Publication of CN114630049A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

The invention provides an image compensation method, a mobile terminal and an image compensation device. The image compensation method comprises: acquiring current frame display information and previous frame display information corresponding to a current frame display image and a previous frame display image respectively, wherein the current frame display information comprises a plurality of first display data and the previous frame display information comprises a plurality of second display data; performing a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine a target motion vector of the current frame display image relative to the previous frame display image; and finally performing image compensation on the current frame display image according to the target motion vector.

Description

Image compensation method, mobile terminal and image compensation device
Technical Field
The present invention generally relates to the field of communications technologies, and in particular, to an image compensation method, a mobile terminal, and an image compensation apparatus.
Background
With the continuous development of electronic technology, many application scenes in life involve operations of performing corresponding processing by using acquired images, and therefore, how to ensure the stability of the acquired images is a problem to be solved at present.
Disclosure of Invention
In order to solve the above problems or other problems, the present invention provides the following technical solutions.
In a first aspect, the present invention provides an image compensation method, including:
acquiring current frame display information and previous frame display information corresponding to a current frame display image and a previous frame display image respectively, wherein the current frame display information comprises a plurality of first display data, and the previous frame display information comprises a plurality of second display data;
performing a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine a target motion vector of the current frame display image relative to the previous frame display image; and
and performing image compensation on the current frame display image according to the target motion vector.
The image compensation method according to an embodiment of the present invention, wherein the plurality of first display data have corresponding first row vectors and first column vectors, the plurality of second display data have corresponding second row vectors and second column vectors, and the performing a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine a target motion vector of the current frame display image relative to the previous frame display image specifically includes:
performing a first cross-correlation operation on the first row vector and the second row vector to obtain a plurality of first operation values, and determining a first motion vector of the current frame display image relative to the previous frame display image in a first direction according to the plurality of first operation values; and
performing a second cross-correlation operation on the first column vector and the second column vector to obtain a plurality of second operation values, and determining a second motion vector of the current frame display image relative to the previous frame display image in a second direction according to the plurality of second operation values.
An image compensation method according to an embodiment of the present invention, wherein the plurality of first operation values attain their minimum at index row_offset, the plurality of second operation values attain their minimum at index col_offset, and the current frame display image has a preset row jitter amount msvx and a preset column jitter amount msvy relative to the previous frame display image, wherein:
the first motion vector is (msvx - row_offset, 0);
the second motion vector is (0, msvy - col_offset); and
the target motion vector is (msvx - row_offset, msvy - col_offset).
The image compensation method according to an embodiment of the present invention, wherein the plurality of first display data and the plurality of second display data each have M rows and N columns, and further includes, before the step of performing a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine a target motion vector of the current frame display image with respect to the previous frame display image:
adding the first display data in each column of the current frame display information respectively to obtain N first column projection values, wherein the N first column projection values form the first row vector;
adding the first display data in each row of the current frame display information respectively to obtain M first row projection values, wherein the M first row projection values form the first column vector;
adding the second display data in each column of the previous frame of display information respectively to obtain N second column projection values, wherein the N second column projection values form the second row vector;
and adding the second display data in each row of the previous frame of display information respectively to obtain M second row projection values, where the M second row projection values form the second column vector.
The image compensation method according to an embodiment of the present invention, before the step of performing a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine a target motion vector of the current frame display image relative to the previous frame display image, further includes:
extracting data from the plurality of first display data in the current frame display information every other preset row or every other preset column to obtain compressed current frame display information; and
extracting data from the plurality of second display data in the previous frame display information every other preset row or every other preset column to obtain compressed previous frame display information.
The image compensation method according to an embodiment of the present invention, wherein the step of performing a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine a target motion vector of the current frame display image relative to the previous frame display image specifically includes:
and performing cross-correlation operation on the compressed current frame display information and the compressed previous frame display information to determine a target motion vector of the current frame display image relative to the previous frame display image.
The image compensation method according to an embodiment of the present invention, wherein the step of performing image compensation on the current frame display image according to the target motion vector specifically includes:
and performing image compensation on the current frame display image according to the target motion vector, the preset row and the preset column.
According to an embodiment of the invention, the first display data and the second display data are gray scale data.
In a second aspect, the present invention provides a mobile terminal comprising a processor configured to perform the image compensation method of any one of the above.
In a third aspect, the present invention provides an image compensation apparatus comprising:
an obtaining module, configured to obtain current frame display information and previous frame display information corresponding to a current frame display image and a previous frame display image, respectively, where the current frame display information includes a plurality of first display data, and the previous frame display information includes a plurality of second display data;
an operation module configured to perform a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine a target motion vector of the current frame display image relative to the previous frame display image; and
a compensation module configured to perform image compensation on the current frame display image according to the target motion vector.
The invention has the beneficial effects that: the invention provides an image compensation method, a mobile terminal and an image compensation device. The image compensation method comprises: acquiring current frame display information and previous frame display information corresponding to a current frame display image and a previous frame display image respectively, wherein the current frame display information comprises a plurality of first display data and the previous frame display information comprises a plurality of second display data; performing a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine a target motion vector of the current frame display image relative to the previous frame display image; and performing image compensation on the current frame display image according to the target motion vector.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed to be used in the description of the embodiments according to the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without inventive effort.
Fig. 1 is a flowchart illustrating an image compensation method according to a first embodiment of the present invention.
Fig. 2 is a schematic flow chart of an image compensation method according to a first embodiment of the present invention.
Fig. 3a to 3d are schematic views of application scenarios of the image compensation method according to the first embodiment of the invention.
Fig. 4 is a flowchart illustrating an image compensation method according to a second embodiment of the present invention.
Fig. 5 is a schematic structural diagram of an image compensation apparatus according to an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
Fig. 7 is a detailed structural diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience and simplicity of description; they do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and thus should not be considered as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" and "second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; may be mechanically connected, may be electrically connected or may be in communication with each other; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the present invention, unless otherwise expressly stated or limited, "above" or "below" a first feature means that the first and second features are in direct contact, or that the first and second features are not in direct contact but are in contact with each other via another feature therebetween. Also, the first feature being "on," "above" and "over" the second feature includes the first feature being directly on and obliquely above the second feature, or merely indicating that the first feature is at a higher level than the second feature. "beneath," "under" and "beneath" a first feature includes the first feature being directly beneath and obliquely beneath the second feature, or simply indicating that the first feature is at a lesser elevation than the second feature.
The following disclosure provides many different embodiments or examples for implementing different features of the invention. To simplify the disclosure of the present invention, the components and arrangements of specific examples are described below. Of course, they are merely examples and are not intended to limit the present invention. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples, such repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. In addition, the present invention provides examples of various specific processes and materials, but one of ordinary skill in the art may recognize applications of other processes and/or uses of other materials.
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating an image compensation method according to a first embodiment of the present invention, and as shown in fig. 1, the image compensation method may specifically include the following steps:
acquisition step S101: acquiring current frame display information and previous frame display information corresponding to a current frame display image and a previous frame display image respectively, wherein the current frame display information comprises a plurality of first display data, and the previous frame display information comprises a plurality of second display data;
operation step S102: performing a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine a target motion vector of the current frame display image relative to the previous frame display image;
a compensation step S103: and performing image compensation on the current frame display image according to the target motion vector.
It should be noted that a vehicle driver monitor system (DMS) is a device that processes images; while the driver is driving, the system can monitor in real time, around the clock, whether the driver is in a fatigue state and whether dangerous driving behaviors exist. In order to ensure the stability of the system during this monitoring operation, it is necessary to provide the system with an image anti-shake function.
Anti-shake technology first appeared in the fields of photography and videography, for example in single-lens reflex cameras, unmanned aerial vehicles, mobile phones, and the like. Specifically, some imaging systems use optical anti-shake to ensure the stability of the obtained image, but optical anti-shake requires additional sensors and actuators to keep the optical elements stable, which increases the cost of the imaging system.
In contrast, the embodiment according to the present invention utilizes an electronic anti-shake method to ensure the stability of the acquired image: a target motion vector of the current frame display image relative to the previous frame display image is calculated, and the current frame display image is then image-compensated with the target motion vector.
Further, in this embodiment, the current frame display image and the previous frame display image are acquired by using a near-infrared camera, and the first display data and the second display data are gray scale data, specifically, the gray scale data may be 16 bits. It should be noted that, in another embodiment according to the present invention, the current frame display image and the previous frame display image may be obtained by using a visible light camera, in this embodiment, data in the current frame display information and the previous frame display information corresponding to the current frame display image and the previous frame display image respectively is RGB data, and in this case, the RGB data needs to be converted into gray scale data, and then the operation step S102 and the compensation step S103 described above need to be performed.
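As a concrete illustration of the conversion mentioned above, the following sketch converts one RGB sample to a gray-scale value. The patent does not specify the conversion formula, so the common ITU-R BT.601 luma weights are used purely as an assumption for illustration.

```python
# Converting RGB display data to gray scale before the correlation step.
# The BT.601 luma weights below are an assumption; the patent only states
# that RGB data must be converted to gray scale data.

def rgb_to_gray(r, g, b):
    """Convert one RGB sample to a gray-scale value (BT.601 weights)."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)
```

In practice this would be applied per pixel of the current frame and previous frame before the projection and cross-correlation steps.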
Referring to fig. 2 and fig. 3a to 3d, fig. 2 is a schematic diagram illustrating a further flow of the image compensation method according to the first embodiment of the invention, and fig. 3a to 3d are schematic diagrams illustrating an application scenario of the image compensation method according to the first embodiment of the invention.
In the embodiment according to the present invention, each of the plurality of first display data and the plurality of second display data has M rows and N columns. Specifically, in one representative example as shown in fig. 3a and 3b, M has a value of 4 and N has a value of 6; that is, the plurality of first display data and the plurality of second display data each have 4 rows and 6 columns. In fig. 3a and 3b, each of the plurality of first display data is represented as q(i, j) and each of the plurality of second display data is represented as p(i, j), where abs(i) ≤ M = 4 and abs(j) ≤ N = 6; the abs function denotes the absolute value of the expression in parentheses. For example, as shown in fig. 3a and 3b, the data value of the first display data denoted as q(2,1) is 7.
Further, in this embodiment, before the above operation step S102 is executed, the first display data and the second display data need to be projected in a first direction (specifically, a row direction) and a second direction (specifically, a column direction), respectively, for example, please continue to refer to the further flowchart of the image compensation method shown in fig. 2, in this embodiment, the image compensation method further includes:
first projection substep S1041: adding first display data in each column of the current frame display information respectively to obtain N first column projection values, wherein the N first column projection values form a first row vector;
the second projection substep S1042: adding first display data in each row of the current frame display information respectively to obtain M first row projection values, wherein a first column vector is formed by the M first row projection values;
third projection substep S1043: adding second display data in each column of the previous frame of display information respectively to obtain N second column projection values, wherein the N second column projection values form a second row vector;
the fourth projection substep S1044: and respectively adding the second display data in each line of the previous frame of display information to obtain M second line projection values, wherein the M second line projection values form a second column vector.
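The four projection substeps above amount to column sums and row sums over the M rows and N columns of display data. A minimal sketch, with plain lists standing in for the display buffers and the function name being illustrative:

```python
# Projection substeps S1041-S1044 as column/row sums of an M x N frame:
# summing each column yields the N-element "row vector", and summing
# each row yields the M-element "column vector".

def projections(frame):
    m, n = len(frame), len(frame[0])
    row_vector = [sum(frame[i][j] for i in range(m)) for j in range(n)]     # N column sums
    column_vector = [sum(frame[i][j] for j in range(n)) for i in range(m)]  # M row sums
    return row_vector, column_vector
```

Running this on the current frame gives the first row/column vectors, and on the previous frame the second row/column vectors.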
It should be noted that, in the embodiment according to the present invention, the first row vector calculated in the first projection substep S1041 is Nk = (N(1,k), N(2,k), …, N(n,k)), the first column vector calculated in the second projection substep S1042 is Mk = (M(1,k), M(2,k), …, M(m,k)), the second row vector calculated in the third projection substep S1043 is Nk-1 = (N(1,k-1), N(2,k-1), …, N(n,k-1)), and the second column vector calculated in the fourth projection substep S1044 is Mk-1 = (M(1,k-1), M(2,k-1), …, M(m,k-1)).
Specifically, in one representative example as shown in fig. 3a and 3b, the calculated first row vector is Nk = (4, 40, 44, 48, 52, 56), the calculated first column vector is Mk = (16, 46, 76, 106), the calculated second row vector is Nk-1 = (40, 44, 48, 52, 56, 60), and the calculated second column vector is Mk-1 = (21, 57, 93, 129).
Further, referring to fig. 2, in the present embodiment, the operation step S102 may specifically include the following two sub-steps:
the first operation substep S1021: performing a first cross-correlation operation on the first row vector and the second row vector to obtain a plurality of first operation values, and determining a first motion vector of the current frame display image relative to the previous frame display image in a first direction according to the plurality of first operation values;
second operation substep S1022: and performing second cross-correlation operation on the first column vector and the second column vector to obtain a plurality of second operation values, and determining a second motion vector of the current frame display image relative to the previous frame display image in a second direction according to the plurality of second operation values.
Specifically, the operation formula of the first cross-correlation operation is

a[x] = Σi abs(N(i, k-1) - N(i + msvx - x, k)), x = 0, 1, …, 2·msvx,

and the operation formula of the second cross-correlation operation is

b[y] = Σi abs(M(i, k-1) - M(i + msvy - y, k)), y = 0, 1, …, 2·msvy,

where the sums run over all indices i, and entries of the row vectors outside 1 ≤ i ≤ N and entries of the column vectors outside 1 ≤ i ≤ M are taken as zero.
Wherein msvx and msvy are the values of the preset row jitter amount and the preset column jitter amount of the current frame display image relative to the previous frame display image, respectively, a[x] is the calculated first operation value, and b[y] is the calculated second operation value. For example, referring to fig. 3c, when the first operation substep S1021 is executed with msvx equal to 2 and x equal to 1, the first operation value can be expressed as:
abs(0 - N(1,k)) + abs(N(1,k-1) - N(2,k)) + abs(N(2,k-1) - N(3,k)) + … + abs(N(5,k-1) - N(6,k)) + abs(N(6,k-1) - 0)
Specifically, the calculated first operation value a[1] is 64. In the calculation with msvx equal to 2 and x equal to 1, the entries of the first row vector and the second row vector cancel each other to the greatest extent, so the operation value is the smallest at x = 1; therefore, in the present embodiment, the first motion vector of the current frame display image relative to the previous frame display image in the first direction is (1,0), and this first motion vector indicates that the current frame display image needs to be shifted one unit to the left. Further, in this embodiment, if the calculated first motion vector is (-1,0), the current frame display image needs to be shifted one unit to the right; if the calculated second motion vector is (0,1), it needs to be shifted one unit up; and if the calculated second motion vector is (0,-1), it needs to be shifted one unit down.
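The worked computation above can be reproduced in a short sketch. The zero-padding behaviour follows the expanded expression for a[1] in the example, while the function and variable names are illustrative:

```python
# Zero-padded absolute-difference operation value from the worked
# example: prev[i] is compared with cur[i + (msvx - x)], and vector
# entries outside the valid index range count as 0. The row vectors
# below are the ones read off Figs. 3a-3c.

def operation_value(prev, cur, shift):
    def at(v, i):
        return v[i] if 0 <= i < len(v) else 0  # zero padding outside the vector
    n = max(len(prev), len(cur))
    return sum(abs(at(prev, i) - at(cur, i + shift))
               for i in range(-abs(shift), n + abs(shift)))

prev_row = [40, 44, 48, 52, 56, 60]   # second row vector, N(k-1)
cur_row = [4, 40, 44, 48, 52, 56]     # first row vector, N(k)
a1 = operation_value(prev_row, cur_row, shift=2 - 1)  # msvx - x with msvx=2, x=1
```

With these vectors the overlapping entries cancel exactly and only the two boundary terms survive, so a1 reproduces the a[1] = 64 stated in the text.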
Further, in the present embodiment, the plurality of first operation values attain their minimum at index row_offset, and the plurality of second operation values attain their minimum at index col_offset, where:
the first motion vector is (msvx - row_offset, 0);
the second motion vector is (0, msvy - col_offset); and
the target motion vector is (msvx - row_offset, msvy - col_offset).
Specifically, in the example shown in fig. 3a to 3d, the target motion vector is (1,0). As shown in fig. 3d, in the process of performing image compensation on the current frame display image with the target motion vector, the data positions vacated by the shift may be filled with 0 or any other value.
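A sketch of the compensation step, assuming the sign convention of the first embodiment in which a target motion vector of (1, 0) shifts the image one unit to the left; the vacated positions are filled with 0 as described for fig. 3d. The names and the sign convention are assumptions of this sketch:

```python
# Apply the target motion vector (dx, dy) to the current frame: shift
# the display data and fill the vacated positions with `fill`.
# Convention assumed here: dx = 1 moves the image one unit to the left,
# dy = 1 one unit up.

def compensate(frame, dx, dy, fill=0):
    m, n = len(frame), len(frame[0])
    out = [[fill] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            si, sj = i + dy, j + dx          # source sample before the shift
            if 0 <= si < m and 0 <= sj < n:
                out[i][j] = frame[si][sj]
    return out
```

A real implementation would operate on the display buffer rather than nested lists, but the shift-and-fill logic is the same.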
According to the first embodiment of the present invention described above, there are provided an image compensation method, a mobile terminal, and an image compensation apparatus. The image compensation method comprises: acquiring current frame display information and previous frame display information corresponding to a current frame display image and a previous frame display image respectively, wherein the current frame display information comprises a plurality of first display data and the previous frame display information comprises a plurality of second display data; performing a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine a target motion vector of the current frame display image relative to the previous frame display image; and performing image compensation on the current frame display image according to the target motion vector.
Referring to fig. 4, fig. 4 is a schematic flow chart of an image compensation method according to a second embodiment of the present invention, and as shown in fig. 4, the image compensation method may specifically include the following steps:
an acquisition step S201: acquiring current frame display information and previous frame display information corresponding to a current frame display image and a previous frame display image respectively, wherein the current frame display information comprises a plurality of first display data, and the previous frame display information comprises a plurality of second display data;
first compression step S202: extracting data of a plurality of first display data in the current frame display information every other preset row or every other preset column to obtain compressed current frame display information;
second compression step S203: extracting data of a plurality of second display data in the previous frame of display information every other preset row or every other preset column to obtain the compressed previous frame of display information;
operation step S204: performing cross-correlation operation on the compressed current frame display information and the compressed previous frame display information to determine a target motion vector of the current frame display image relative to the previous frame display image;
compensation step S205: and performing image compensation on the current frame display image according to the target motion vector, the preset row and the preset column.
It should be noted that, different from the first embodiment described above, in this embodiment, before performing the cross-correlation operation on the plurality of first display data and the plurality of second display data, the same data compression is performed on the plurality of first display data and the plurality of second display data, respectively, so that the burden of the subsequent operation step S204 can be reduced.
Specifically, in addition to the data compression methods listed in the first compression step S202 and the second compression step S203, other data compression methods may be adopted, for example, a plurality of first display data and a plurality of second display data with the same number in the current frame display information and the previous frame display information are respectively obtained by a template of "preset row by preset column", then the average values are respectively obtained, and then the operation step S204 is performed.
It should be noted that, since data compression is performed before the operation step S204, the target motion vector used in the compensation step S205 needs to be matched with the values of the preset row and the preset column used in the first compression step S202 and the second compression step S203. For example, if the plurality of first display data in the current frame display information and the plurality of second display data in the previous frame display information are extracted every other 1 row or 1 column in the first compression step S202 and the second compression step S203, and the target motion vector calculated in the operation step S204 is (1,1), then the current frame display image needs to be image-compensated by (2,2) in the compensation step S205.
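The compression and matching steps of this embodiment can be sketched together. The step sizes (one row and one column skipped, matching the (1,1) to (2,2) example) and all names are illustrative assumptions:

```python
# S202/S203: keep every (skip+1)-th row and column of the display data,
# then scale the motion vector computed on the compressed data back up
# by the same factors before the compensation step S205.

def decimate(frame, skip_rows=1, skip_cols=1):
    rows = frame[::skip_rows + 1]                  # keep every (skip_rows+1)-th row
    return [row[::skip_cols + 1] for row in rows]  # keep every (skip_cols+1)-th column

def rescale_vector(vec, skip_rows=1, skip_cols=1):
    dx, dy = vec
    return (dx * (skip_cols + 1), dy * (skip_rows + 1))
```

With one row and one column skipped, a vector of (1, 1) computed on the compressed frames rescales to (2, 2), as in the example above.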
Specifically, the image compensation method provided by the embodiment of the invention is applicable to applications or fields with fixed scenes, such as vehicle-mounted 360-degree surround-view systems, reversing cameras, streaming-media rear-view mirrors, driving recorders, and security monitoring.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an image compensation apparatus 200 according to an embodiment of the present invention, showing its components and their relative positions.
As shown in fig. 5, the image compensation apparatus 200 includes an obtaining module 210, an operation module 220, and a compensation module 230, wherein:
the obtaining module 210 is configured to obtain current frame display information and previous frame display information corresponding to a current frame display image and a previous frame display image, respectively, where the current frame display information includes a plurality of first display data, and the previous frame display information includes a plurality of second display data;
the operation module 220 is configured to perform a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine a target motion vector of the current frame display image relative to the previous frame display image;
the compensation module 230 is configured to perform image compensation on the current frame display image according to the target motion vector.
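A minimal end-to-end sketch of the three modules might look like the following. The class layout, the projection-difference matching criterion, and the use of a circular shift for compensation are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

class ImageCompensator:
    """Illustrative counterparts of modules 210 (obtain), 220 (operate), 230 (compensate)."""

    def acquire(self, frame):
        # Obtaining module: gray projections of the display data --
        # column sums form a row vector, row sums form a column vector.
        return frame.sum(axis=0), frame.sum(axis=1)

    def operate(self, prev, curr, max_shift=3):
        # Operation module: match the projections of both frames to find
        # the target motion vector of curr relative to prev.
        prev_row, prev_col = self.acquire(prev)
        curr_row, curr_col = self.acquire(curr)
        dx = self._match(prev_row, curr_row, max_shift)  # horizontal shift
        dy = self._match(prev_col, curr_col, max_shift)  # vertical shift
        return dy, dx

    def compensate(self, curr, motion):
        # Compensation module: shift the current frame back by the
        # motion vector (circular shift used here for simplicity).
        dy, dx = motion
        return np.roll(curr, (-dy, -dx), axis=(0, 1))

    @staticmethod
    def _match(prev_proj, curr_proj, max_shift):
        # Pick the shift minimizing the squared projection difference.
        n = len(prev_proj)
        errs = {}
        for s in range(-max_shift, max_shift + 1):
            if s >= 0:
                a, b = curr_proj[s:], prev_proj[:n - s]
            else:
                a, b = curr_proj[:n + s], prev_proj[-s:]
            errs[s] = float(np.mean((a - b) ** 2))
        return min(errs, key=errs.get)
```

In practice the compensation step would crop or pad rather than wrap pixels; the circular shift is used only to keep the sketch short.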
Referring to fig. 6, fig. 6 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention, showing its components and their relative positions; the mobile terminal may be a smart phone or a tablet computer.
As shown in fig. 6, the mobile terminal 100 includes a processor 101 and a memory 102. The processor 101 is electrically connected to the memory 102 and is configured to execute the image compensation method described above.
The processor 101 is a control center of the mobile terminal 100, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or loading an application program stored in the memory 102 and calling data stored in the memory 102, thereby performing overall monitoring of the mobile terminal.
Referring to fig. 7, fig. 7 is a detailed structural schematic diagram of a mobile terminal according to an embodiment of the present invention, showing its components and their relative positions; the mobile terminal may be a smart phone or a tablet computer.
Fig. 7 shows a specific block diagram of the mobile terminal 100 according to an embodiment of the present invention. As shown in fig. 7, the mobile terminal 100 may include Radio Frequency (RF) circuitry 110, a memory 120 including one or more computer-readable storage media, an input unit 130, a display unit 140, a sensor 150, audio circuitry 160, a transmission module 170 (e.g., a Wireless Fidelity (Wi-Fi) module), a processor 180 including one or more processing cores, and a power supply 190. Those skilled in the art will appreciate that the mobile terminal architecture illustrated in fig. 7 is not intended to be limiting: the mobile terminal may include more or fewer components than those illustrated, combine certain components, or arrange the components differently.
The RF circuit 110 is used for receiving and transmitting electromagnetic waves and for converting between electromagnetic waves and electrical signals, thereby communicating with a communication network or other devices. The RF circuit 110 may include various existing circuit components for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF circuit 110 may communicate with various networks, such as the Internet, an intranet, or a wireless network, or may communicate with other devices over a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (WCDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., the Institute of Electrical and Electronics Engineers (IEEE) standards IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), other suitable protocols for e-mail, instant messaging, and short messages, as well as any other suitable communication protocol, even including protocols that have not yet been developed.
The memory 120 may be configured to store software programs and modules, such as the program instructions corresponding to the image compensation method described above, and the processor 180 executes various functional applications and data processing — that is, the image compensation method — by running the software programs and modules stored in the memory 120. The memory 120 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 120 may further include memory located remotely from the processor 180, which may be connected to the mobile terminal 100 through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input unit 130 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In particular, the input unit 130 may include a touch-sensitive surface 131 as well as other input devices 132. The touch-sensitive surface 131, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch-sensitive surface 131 using a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a predetermined program. Alternatively, the touch-sensitive surface 131 may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 180, and receives and executes commands sent from the processor 180. Additionally, the touch-sensitive surface 131 may be implemented as a resistive, capacitive, infrared, or surface acoustic wave type. In addition to the touch-sensitive surface 131, the input unit 130 may also include other input devices 132, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 140 may be used to display information input by or provided to the user, as well as the various graphical user interfaces of the mobile terminal 100, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 140 may include a display panel 141, which may optionally be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or the like. Further, the touch-sensitive surface 131 may cover the display panel 141; when a touch operation on or near the touch-sensitive surface 131 is detected, it is transmitted to the processor 180 to determine the type of touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to the type of touch event. Although in the figures the touch-sensitive surface 131 and the display panel 141 are shown as two separate components implementing input and output functions, in some embodiments the touch-sensitive surface 131 may be integrated with the display panel 141 to implement both.
The mobile terminal 100 may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 141 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel and/or the backlight when the mobile terminal 100 is moved to the user's ear. As one type of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), and can detect the magnitude and direction of gravity when the terminal is stationary; it can be used for applications that recognize the posture of the mobile phone (such as switching between landscape and portrait, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Other sensors, such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, may also be configured in the mobile terminal 100 and are not described in detail here.
The audio circuit 160, the speaker 161, and the microphone 162 may provide an audio interface between a user and the mobile terminal 100. The audio circuit 160 may convert received audio data into an electrical signal and transmit it to the speaker 161, which converts it into a sound signal for output; conversely, the microphone 162 converts a collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data. The audio data is then output to the processor 180 for processing and forwarded, for example, via the RF circuit 110 to another terminal, or output to the memory 120 for further processing. The audio circuit 160 may also include an earphone jack to provide communication between a peripheral headset and the mobile terminal 100.
Through the transmission module 170 (e.g., a Wi-Fi module), the mobile terminal 100 can provide the user with wireless broadband Internet access, for example, helping the user receive and send requests and information. Although the transmission module 170 is shown in the drawings, it is understood that it is not an essential component of the mobile terminal 100 and may be omitted as needed without changing the essence of the invention.
The processor 180 is the control center of the mobile terminal 100. It connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal 100 and processes data by running or executing software programs and/or modules stored in the memory 120 and calling data stored in the memory 120, thereby monitoring the mobile terminal as a whole. Optionally, the processor 180 may include one or more processing cores; in some embodiments, the processor 180 may integrate an application processor, which primarily handles the operating system, user interfaces, and applications, and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor may alternatively not be integrated into the processor 180.
The mobile terminal 100 may also include a power supply 190 (e.g., a battery) for powering the various components. In some embodiments, the power supply may be logically coupled to the processor 180 via a power management system, which manages charging, discharging, and power consumption. The power supply 190 may also include one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the mobile terminal 100 further includes a camera (e.g., a front camera, a rear camera, etc.), a bluetooth module, a flashlight, etc., which will not be described herein. Specifically, in the present embodiment, the display unit of the mobile terminal 100 is a touch screen display.
In addition to the above embodiments, the present invention may have other embodiments. All technical solutions formed by using equivalents or equivalent substitutions fall within the protection scope of the claims of the present invention.
In summary, although preferred embodiments of the present invention have been described above, they are not intended to limit the present invention. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present invention; therefore, the scope of the present invention shall be determined by the appended claims.

Claims (10)

1. An image compensation method, characterized in that the image compensation method comprises:
acquiring current frame display information and previous frame display information corresponding to a current frame display image and a previous frame display image respectively, wherein the current frame display information comprises a plurality of first display data, and the previous frame display information comprises a plurality of second display data;
performing a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine a target motion vector of the current frame display image relative to the previous frame display image; and
performing image compensation on the current frame display image according to the target motion vector.
2. The image compensation method according to claim 1, wherein the plurality of first display data have corresponding first row vectors and first column vectors, the plurality of second display data have corresponding second row vectors and second column vectors, and the step of performing a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine the target motion vector of the current frame display image relative to the previous frame display image specifically comprises:
performing a first cross-correlation operation on the first row vector and the second row vector to obtain a plurality of first operation values, and determining a first motion vector of the current frame display image relative to the previous frame display image in a first direction according to the plurality of first operation values; and
performing a second cross-correlation operation on the first column vector and the second column vector to obtain a plurality of second operation values, and determining a second motion vector of the current frame display image relative to the previous frame display image in a second direction according to the plurality of second operation values.
3. The image compensation method according to claim 2, wherein there is a first minimum value row_offset among the plurality of first operation values and a second minimum value col_offset among the plurality of second operation values, and the current frame display image has a preset row jitter amount msvx and a preset column jitter amount msvy relative to the previous frame display image, wherein:
the first motion vector is (msvx-row _ offset, 0);
the second motion vector is (0, msvy-col_offset); and
the target motion vector is (msvx-row _ offset, msvy-col _ offset).
4. The image compensation method according to claim 2, wherein the plurality of first display data and the plurality of second display data each have M rows and N columns, and further comprising, before the step of performing a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine the target motion vector of the current frame display image relative to the previous frame display image:
adding the first display data in each column of the current frame display information respectively to obtain N first column projection values, wherein the N first column projection values form the first row vector;
adding the first display data in each row of the current frame display information respectively to obtain M first row projection values, wherein the M first row projection values form the first column vector;
adding the second display data in each column of the previous frame display information respectively to obtain N second column projection values, wherein the N second column projection values form the second row vector; and
adding the second display data in each row of the previous frame display information respectively to obtain M second row projection values, wherein the M second row projection values form the second column vector.
5. The image compensation method according to claim 1, wherein before the step of performing a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine a target motion vector of the current frame display image relative to the previous frame display image, the method further comprises:
performing data extraction on the plurality of first display data in the current frame display information every other preset row or every other preset column to obtain compressed current frame display information; and
performing data extraction on the plurality of second display data in the previous frame display information every other preset row or every other preset column to obtain compressed previous frame display information.
6. The image compensation method according to claim 5, wherein the step of performing a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine a target motion vector of the current frame display image relative to the previous frame display image specifically comprises:
performing a cross-correlation operation on the compressed current frame display information and the compressed previous frame display information to determine a target motion vector of the current frame display image relative to the previous frame display image.
7. The image compensation method according to claim 6, wherein the step of performing image compensation on the current frame display image according to the target motion vector specifically comprises:
performing image compensation on the current frame display image according to the target motion vector, the preset row, and the preset column.
8. The image compensation method according to claim 1, wherein the first display data and the second display data are grayscale data.
9. A mobile terminal, characterized in that it comprises a processor configured to perform the image compensation method according to any one of claims 1-8.
10. An image compensation apparatus, characterized in that the image compensation apparatus comprises:
an obtaining module, configured to obtain current frame display information and previous frame display information corresponding to a current frame display image and a previous frame display image, respectively, where the current frame display information includes a plurality of first display data, and the previous frame display information includes a plurality of second display data;
an operation module configured to perform a cross-correlation operation on the plurality of first display data and the plurality of second display data to determine a target motion vector of the current frame display image relative to the previous frame display image; and
a compensation module configured to perform image compensation on the current frame display image according to the target motion vector.
CN202210267501.0A 2022-03-17 2022-03-17 Image compensation method, mobile terminal and image compensation device Pending CN114630049A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210267501.0A CN114630049A (en) 2022-03-17 2022-03-17 Image compensation method, mobile terminal and image compensation device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210267501.0A CN114630049A (en) 2022-03-17 2022-03-17 Image compensation method, mobile terminal and image compensation device

Publications (1)

Publication Number Publication Date
CN114630049A true CN114630049A (en) 2022-06-14

Family

ID=81901334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210267501.0A Pending CN114630049A (en) 2022-03-17 2022-03-17 Image compensation method, mobile terminal and image compensation device

Country Status (1)

Country Link
CN (1) CN114630049A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070248167A1 (en) * 2006-02-27 2007-10-25 Jun-Hyun Park Image stabilizer, system having the same and method of stabilizing an image
CN101692692A (en) * 2009-11-02 2010-04-07 彭健 Method and system for electronic image stabilization
CN102098440A (en) * 2010-12-16 2011-06-15 北京交通大学 Electronic image stabilizing method and electronic image stabilizing system aiming at moving object detection under camera shake


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
周亚军; 王翔; 苏享; 姚志龙; 姚春柱: "Research on Fast Gray-Scale Projection Image Stabilization Algorithm", Ordnance Industry Automation, vol. 28, no. 8, pages 88-93 *
李大成; 杨晓东: "An Image Stabilization Algorithm Based on Gray Projection Difference", Ship Electronic Engineering, vol. 37, no. 1, pages 82-84 *

Similar Documents

Publication Publication Date Title
CN109348125B (en) Video correction method, video correction device, electronic equipment and computer-readable storage medium
CN107038681B (en) Image blurring method and device, computer readable storage medium and computer device
CN108605096B (en) Electronic equipment shooting method and device
CN112308806B (en) Image processing method, device, electronic equipment and readable storage medium
CN108038825B (en) Image processing method and mobile terminal
US20160112701A1 (en) Video processing method, device and system
CN107749046B (en) Image processing method and mobile terminal
CN107730460B (en) Image processing method and mobile terminal
JP6862564B2 (en) Methods, devices and non-volatile computer-readable media for image composition
CN112532958B (en) Image processing method, device, electronic equipment and readable storage medium
CN109005355B (en) Shooting method and mobile terminal
US20200090309A1 (en) Method and device for denoising processing, storage medium, and terminal
CN109462732B (en) Image processing method, device and computer readable storage medium
CN109348212B (en) Image noise determination method and terminal equipment
CN111028192B (en) Image synthesis method and electronic equipment
CN110769162B (en) Electronic equipment and focusing method
CN112330564A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN114384465A (en) Azimuth angle determination method and device
CN108965701B (en) Jitter correction method and terminal equipment
CN111355892B (en) Picture shooting method and device, storage medium and electronic terminal
CN110933305B (en) Electronic equipment and focusing method
CN116363174A (en) Parameter calibration method, storage medium, co-processing chip and electronic equipment
CN114630049A (en) Image compensation method, mobile terminal and image compensation device
CN109729264B (en) Image acquisition method and mobile terminal
CN111723615B (en) Method and device for judging matching of detected objects in detected object image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination