CN111553846B - Super-resolution processing method and device - Google Patents

Super-resolution processing method and device

Info

Publication number
CN111553846B
CN111553846B (application CN202010400669.5A)
Authority
CN
China
Prior art keywords
area, super-resolution processing, determining, frame image
Legal status: Active
Application number
CN202010400669.5A
Other languages
Chinese (zh)
Other versions
CN111553846A (en)
Inventor
方攀
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010400669.5A
Publication of CN111553846A
Application granted
Publication of CN111553846B
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053: Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a super-resolution processing method and device applied to an electronic device. The method comprises: determining at least one gaze point of a user on a target frame image by an eye tracking technique; determining a first target area on the target frame image according to the at least one gaze point; performing image segmentation on the target frame image based on the at least one gaze point and first image feature information in the first target area, so as to obtain a plurality of region blocks; and performing super-resolution processing on the plurality of region blocks, and outputting the processed target frame image. The method and device help improve super-resolution processing efficiency and enhance the super-resolution effect in the areas the user pays most attention to.

Description

Super-resolution processing method and device
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a super-resolution processing method and apparatus.
Background
At present, commonly used super-resolution processing techniques recover a high-resolution image from a plurality of low-resolution images or image sequences, and fall into two types: super-resolution restoration and super-resolution reconstruction. Some means exist to control the super-resolution effect of an image; for example, deep-learning-based super-resolution controls the effect through the number of convolutions and the size of the convolution kernels. However, such control is still applied to the whole image area and occupies considerable system resources. If whole-image super-resolution is applied in the video field to process every frame, the processing effect has to be reduced in order to balance the real-time requirements of video against system occupation.
Disclosure of Invention
The embodiments of the present application provide a super-resolution processing method and device, which are intended to improve super-resolution processing efficiency and the super-resolution effect in the area the user focuses on.
In a first aspect, an embodiment of the present application provides a super-resolution processing method, which is applied to an electronic device, where the method includes:
determining at least one gaze point of a user on a target frame image by an eye tracking technique;
determining a first target area on the target frame image according to the at least one gaze point;
performing image segmentation on the target frame image based on the at least one gaze point and first image feature information in the first target area, so as to obtain a plurality of region blocks;
and performing super-resolution processing on the plurality of region blocks, and outputting the processed target frame image.
In a second aspect, an embodiment of the present application provides a super-resolution processing apparatus, which is applied to an electronic device and includes a determining unit, a segmentation unit, and a processing unit, where:
the determining unit is configured to determine at least one gaze point of a user on a target frame image through an eye tracking technique, and to determine a first target area on the target frame image according to the at least one gaze point;
the segmentation unit is configured to perform image segmentation on the target frame image based on the at least one gaze point and first image feature information in the first target area, so as to obtain a plurality of region blocks;
the processing unit is configured to perform super-resolution processing on the plurality of region blocks and output the processed target frame image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing steps in any of the methods of the first aspect of the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program causes a computer to perform some or all of the steps as described in any of the methods of the first aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in any of the methods of the first aspect of embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiments of the present application, the electronic device determines at least one gaze point of the user on the target frame image by an eye tracking technique, determines a first target area on the target frame image according to the at least one gaze point, performs image segmentation on the target frame image based on the at least one gaze point and the first image feature information in the first target area to obtain a plurality of region blocks, and then performs super-resolution processing on the plurality of region blocks and outputs the processed target frame image. The electronic device thus determines, through eye tracking, the first target area the user focuses on and segments the target frame image according to it, emphasizing the super-resolution effect of the first target area while weakening the super-resolution processing of other areas, which improves the super-resolution processing efficiency of the target frame image.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 2 is a schematic software structure of an electronic device according to an embodiment of the present application;
fig. 3A is a schematic flow chart of a super-resolution processing method according to an embodiment of the present application;
FIG. 3B is a schematic diagram of a region segmentation provided in an embodiment of the present application;
fig. 3C is a schematic diagram of a multi-region super-resolution processing effect according to an embodiment of the present application;
FIG. 3D is a schematic diagram of a first target area according to an embodiment of the present application;
FIG. 3E is a schematic illustration of yet another first target area provided by an embodiment of the present application;
FIG. 3F is a schematic view of yet another first target area provided by an embodiment of the present application;
fig. 4 is a flow chart of another super-resolution processing method according to an embodiment of the present application;
fig. 5 is a flow chart of another super-resolution processing method according to an embodiment of the present application;
FIG. 6 is a block diagram of a distributed functional unit of a super-resolution processing apparatus according to an embodiment of the present application;
fig. 7 is a block diagram of an integrated functional unit of a super-resolution processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
For a better understanding of aspects of embodiments of the present application, related terms and concepts that may be related to embodiments of the present application are described below.
1) The electronic device may be a portable electronic device that also has other functions, such as personal digital assistant and/or music player functions, for example a cell phone, a tablet computer, or a wearable electronic device with wireless communication capability (e.g., a smart watch). Exemplary portable electronic devices include, but are not limited to, devices running iOS, Android, Microsoft, or other operating systems. The portable electronic device may also be another kind of portable electronic device, such as a laptop computer. It should also be appreciated that in other embodiments the electronic device may not be a portable electronic device but a desktop computer.
2) Eye tracking, also known as eyeball tracking, gaze tracking, or gaze-point tracking, refers to a mechanism that determines the user's gaze direction and gaze point based on image acquisition combined with gaze estimation techniques.
3) The gaze point refers to the point where the eye's line of sight falls on the plane of the screen.
4) Super-resolution processing refers to improving the resolution of an original image by hardware or software methods, typically obtaining a high-resolution image from a series of low-resolution images.
By way of example, fig. 1 shows a schematic diagram of an electronic device 100. Electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a compass 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identity module (subscriber identification module, SIM) card interface 195, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate components or may be integrated in one or more processors. In some embodiments, the electronic device 100 may also include one or more processors 110. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution. In other embodiments, memory may also be provided in the processor 110 for storing instructions and data. Illustratively, the memory in the processor 110 may be a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. This avoids repeated accesses and reduces the latency of the processor 110, thereby improving the efficiency of the electronic device 100 in processing data or executing instructions.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include inter-integrated circuit (inter-integrated circuit, I2C) interfaces, inter-integrated circuit audio (inter-integrated circuit sound, I2S) interfaces, pulse code modulation (pulse code modulation, PCM) interfaces, universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interfaces, mobile industry processor interfaces (mobile industry processor interface, MIPI), general-purpose input/output (GPIO) interfaces, SIM card interfaces, and/or USB interfaces, among others. The USB interface 130 is an interface conforming to the USB standard, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. The USB interface 130 may also be used to connect headphones through which audio is played.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle times, battery health (leakage, impedance), and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like, and includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or more display screens 194.
The electronic device 100 may implement a photographing function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also perform algorithm optimization on noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature, etc. of the photographed scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or more cameras 193.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store one or more computer programs, including instructions. The processor 110 may cause the electronic device 100 to execute the method of displaying page elements provided in some embodiments of the present application, as well as various applications, data processing, and the like, by executing the above-described instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area can store an operating system; the storage program area may also store one or more applications (such as gallery, contacts, etc.), etc. The storage data area may store data created during use of the electronic device 100 (e.g., photos, contacts, etc.), and so on. In addition, the internal memory 121 may include high-speed random access memory, and may also include nonvolatile memory, such as one or more disk storage units, flash memory units, universal flash memory (universal flash storage, UFS), and the like. In some embodiments, processor 110 may cause electronic device 100 to perform the methods of displaying page elements provided in embodiments of the present application, as well as other applications and data processing, by executing instructions stored in internal memory 121, and/or instructions stored in a memory provided in processor 110. The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The pressure sensor 180A is used for sensing a pressure signal and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensor 180A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates carrying conductive material; when a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A, and may also calculate the position of the touch from its detection signal. In some embodiments, touch operations that act on the same position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short-message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the icon, an instruction to create a new short message is executed.
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., X, Y and Z axis) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake. The gyro sensor 180B may also be used for navigating, somatosensory game scenes.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes), and may detect the magnitude and direction of gravity when the electronic device 100 is stationary. It can also be used to recognize the attitude of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may utilize the collected fingerprint feature to unlock the fingerprint, access the application lock, photograph the fingerprint, answer the incoming call, etc.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the reported temperature exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
The touch sensor 180K, also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is for detecting a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a different location than the display 194.
Fig. 2 is a software structure block diagram of the electronic device 100 according to the embodiment of the present application. The layered architecture divides the software into several layers, each with a clear role and division of labour, and the layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, from top to bottom: the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer. The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. It can acquire the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and so on.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without user interaction, for example notifications that a download is complete or message alerts. The notification manager may also present notifications that appear in the system top status bar as charts or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen as a dialog window. Examples include a text prompt in the status bar, an alert tone, vibration of the electronic device, or a blinking indicator light.
The Android runtime includes a core library and virtual machines, and is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part contains the functions the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files, and performs functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media library (media library), three-dimensional graphics processing library (e.g., openGL ES), 2D graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio video encoding formats, such as: MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. It at least includes display drivers, camera drivers, audio drivers, and sensor drivers.
The embodiments of the present application are described in detail below.
Referring to fig. 3A, fig. 3A is a flowchart of a super-resolution processing method according to an embodiment of the present application. The method is applied to an electronic device and includes the following operations.
S301, the electronic device determines at least one gaze point of a user on a target frame image through an eye tracking technique;
the at least one gaze point may be a point where a user gaze time period determined according to the eye tracking technology is greater than a preset time period threshold, and the preset time period threshold may be, for example, 5s, 8s, etc., which is not limited herein.
S302, the electronic device determines a first target area on the target frame image according to the at least one gaze point;
The first target area may take various shapes, for example a circular area, a rectangular area, a triangular area, a human-shaped area, etc., which is not limited herein.
The electronic device may determine the first target area on the target frame image according to the at least one gaze point in various ways. For example, the closed area formed by connecting the at least one gaze point may be the first target area; or an area of preset size around the gaze point located at the central position among the at least one gaze point may be the first target area; or the closed area formed by connecting the at least one gaze point may serve as a central area, with that area plus a surrounding area of preset size forming the first target area; or the object the user is gazing at may be determined according to the at least one gaze point, and the area where that object is located taken as the first target area. This is not limited herein.
S303, the electronic device performs image segmentation on the target frame image based on the at least one gaze point and first image feature information in the first target region, so as to obtain a plurality of region blocks;
The image segmentation may separate the foreground from the background in the image, or separate the areas occupied by different objects in the image, and so on. The plurality of region blocks are, for example, as shown in fig. 3B.
The electronic device may perform this segmentation in various ways. For example, it may determine the gazed object according to the at least one gaze point and then separate that object from other objects according to the first image feature information; or it may determine a first plane according to the at least one gaze point and then separate the image content that is not in the first plane from the image content in the first plane, which is not limited herein.
For example, if the first target area includes a landscape and a person A, and the electronic device determines from the at least one gaze point that the gazed object is person A, the electronic device segments the landscape features in the background from person A, dividing the first target area into different region blocks.
S304, the electronic device performs super-resolution processing on the plurality of region blocks and outputs the processed target frame image.
As shown in fig. 3C, the resolutions of the plurality of region blocks on the processed target frame image differ. For example, the electronic device may process region block 1 to a first resolution, region block 2 to a second resolution, and region block 3 to a third resolution, where the first, second, and third resolutions are all different.
Specifically, when region block 1 is the area where the object gazed at by the user is located and region block 2 is the background area within the first target area, the first resolution is set higher than the second resolution. The electronic device may reconstruct region block 1 from 5 low-resolution images corresponding to it to obtain the super-resolved region block 1, and reconstruct region block 2 from 3 low-resolution images corresponding to it to obtain the super-resolved region block 2, where different low-resolution images differ in exposure value, exposure time, or aperture. Alternatively, the resolution may be increased by interpolation, raising region block 1 to the first resolution and region block 2 to the second resolution.
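A minimal sketch of the interpolation variant mentioned above, assuming OpenCV, a three-channel BGR frame, and one binary mask per region block: blocks closer to the user's attention are upscaled with a more expensive interpolation kernel, while the remaining blocks use cheaper ones. The kernel choices and the fixed scale factor are placeholders.

```python
import cv2
import numpy as np

def upscale_by_region(frame, region_masks, scale=2):
    """Upscale each region block of `frame` with a kernel matched to its
    importance: Lanczos for the gazed object, bicubic for the rest of the
    first target area, bilinear elsewhere (illustrative choices).
    region_masks: one uint8 mask per block, ordered by importance."""
    kernels = [cv2.INTER_LANCZOS4, cv2.INTER_CUBIC, cv2.INTER_LINEAR]
    h, w = frame.shape[:2]
    out = np.zeros((h * scale, w * scale, frame.shape[2]), dtype=frame.dtype)
    for mask, kernel in zip(region_masks, kernels):
        up = cv2.resize(frame, (w * scale, h * scale), interpolation=kernel)
        up_mask = cv2.resize(mask, (w * scale, h * scale),
                             interpolation=cv2.INTER_NEAREST).astype(bool)
        out[up_mask] = up[up_mask]   # copy this block's pixels into the output
    return out
```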
It can be seen that, in the embodiment of the present application, the electronic device determines at least one gaze point of the user on the target frame image by an eye tracking technique, determines a first target area on the target frame image according to the at least one gaze point, performs image segmentation on the target frame image based on the at least one gaze point and the first image feature information in the first target area to obtain a plurality of region blocks, and then performs super-resolution processing on the plurality of region blocks and outputs the processed target frame image. The electronic device thus determines, through eye tracking, the first target area the user focuses on and segments the target frame image according to it, emphasizing the super-resolution effect of the first target area while weakening the super-resolution processing of other areas, which improves the super-resolution processing efficiency of the target frame image.
In one possible example, the at least one gaze point comprises at least two gaze points, and the determining the first target area on the target frame image from the at least one gaze point comprises:
connecting every two of the gaze points to obtain at least one straight line;
and determining the first target area on the target frame image according to the area formed by the at least one straight line.
The first target area may be determined from the area formed by the at least one straight line in various ways. For example, as shown in fig. 3D, the first target area may be the maximum area enclosed by the at least one straight line; or, as shown in fig. 3E, the first target area may be the area directly formed by the at least one straight line; or, as shown in fig. 3F, the electronic device may determine the corresponding object information according to the area formed by the at least one straight line and then take the area where that object is located, such as the face area shown in fig. 3F, as the first target area. This is not limited herein.
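Reading fig. 3D as the convex hull of the gaze points (our interpretation, not an explicit statement in the text), the "maximum area formed by the at least one straight line" can be sketched as follows:

```python
import cv2
import numpy as np

def first_target_region(gaze_points, frame_shape):
    """Return a binary mask of the largest region enclosed by the lines
    connecting every two gaze points, i.e. the convex hull of the points.
    gaze_points: list of (x, y) pixel coordinates on the target frame."""
    pts = np.array(gaze_points, dtype=np.int32)      # (N, 2)
    hull = cv2.convexHull(pts)                       # closed polygon vertices
    mask = np.zeros(frame_shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, hull, 255)              # 255 inside the region
    return mask
```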
In this example, the electronic device determines the first target area from the area formed by the lines connecting the gaze points, rather than simply taking a regular area around the at least one gaze point, which helps improve the accuracy of the first target area and better meets the user's needs.
In one possible example, the determining the first target area on the target frame image according to the at least one gaze point includes:
acquiring second image characteristic information of a second target area in the history frame image;
determining a first reference area on the target frame image according to the second image characteristic information;
judging whether the at least one gaze point is within the first reference region;
and when all of the at least one gaze point are determined to be within the first reference area, determining the first reference area as the first target area.
The target frame image is any frame image in a video captured by the user with the electronic device, and the history frame image is a frame image preceding the target frame image in the video.
The first reference area may be determined on the target frame image according to the second image feature information by, for example, determining third image feature information on the target frame image whose similarity to the second image feature information is greater than a preset similarity threshold, and then taking the area corresponding to the third image feature information as the first reference area. For example, if the second image feature information is facial feature information, the area on the target frame image corresponding to the same facial feature information is determined to be the first reference area.
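One way to realize this similarity comparison is plain template matching: the second target area cropped from the history frame is searched for in the target frame, and the best match is accepted only if its score clears the similarity threshold. The choice of matcher and the 0.8 threshold are assumptions for illustration; the embodiment only requires some feature-similarity test.

```python
import cv2

def find_reference_area(history_patch, target_frame, sim_threshold=0.8):
    """Locate the history frame's second target area inside the target
    frame. Returns (x, y, w, h) of the first reference area, or None if
    no location on the target frame is similar enough."""
    result = cv2.matchTemplate(target_frame, history_patch,
                               cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < sim_threshold:
        return None                      # nothing on this frame matches
    h, w = history_patch.shape[:2]
    return (max_loc[0], max_loc[1], w, h)
```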
In this example, the electronic device determines the first target area of the target frame image according to the determined second target area on the history frame image, which is beneficial to reducing complexity of super-resolution processing, improving intelligence of super-resolution processing, and reducing system consumption.
In this possible example, after determining whether the at least one gaze point is within the first reference area, the method further comprises:
when it is determined that the at least one gaze point includes a target gaze point that is not within the first reference area, determining a second reference area according to the target gaze point;
and using the first reference area and the second reference area together as the first target area.
For example, if the first reference area is a face area and the at least one gaze point includes a target gaze point on a body part below the face area, the second reference area is determined from the target gaze point to be the upper-body area of the human body, so the first target area is the union of the face area and the upper-body area.
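A sketch of this union step, with areas simplified to axis-aligned rectangles and a preset-size rectangle (an assumed policy) placed around the outlying target gaze point:

```python
def merge_reference_areas(first_rect, target_gaze, half_size=80):
    """first_rect: (x, y, w, h) of the first reference area.
    Build a second reference rectangle of preset size around the target
    gaze point that fell outside the first reference area, and return
    both rectangles together as the first target area."""
    gx, gy = target_gaze
    second_rect = (gx - half_size, gy - half_size, 2 * half_size, 2 * half_size)
    return [first_rect, second_rect]     # processed together downstream
```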
In this example, when it is determined that the at least one gaze point includes a target gaze point that is not in the first reference area, the electronic device uses the second reference area and the first reference area determined by the target gaze point as the first target area, which is favorable for improving accuracy of the first target area.
In one possible example, the performing image segmentation on the target frame image based on the at least one gaze point and the first image feature information in the first target region to obtain a plurality of region blocks includes:
determining corresponding object information according to the at least one gaze point;
determining a reference plane in which the object information is located;
acquiring first relative distance information between the reference plane and the information, other than the object information, in the first image feature information;
and segmenting the first target area according to the first relative distance information, and segmenting the first target area from the rest of the target frame image, so as to obtain a plurality of region blocks.
The corresponding object information may be determined according to the at least one gaze point in various ways. For example, reference object information may be determined for each gaze point, and the object information occurring most often among the multiple pieces of reference object information taken as the object information corresponding to the at least one gaze point; or several gaze points in the central area of the at least one gaze point may be selected, and the object information corresponding to the selected gaze points taken as the result.
For example, if the object information is face information, the face is determined to lie in a first reference plane, and it is determined from the first image feature information that the first target area also contains mountain information in the background. Segmenting the first target area according to the first relative distance information may then mean separating the face from the mountains in the background to obtain region 1 and region 2 as shown in fig. 3C, and segmenting the first target area from the rest of the target frame image to obtain region 3.
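Assuming a per-pixel depth estimate is available (for example from a depth sensor or a monocular depth network; the embodiment does not specify the source), the distance-based split can be sketched as thresholding depth against the reference plane of the gazed object:

```python
import numpy as np

def split_by_plane_distance(depth_map, target_mask, object_depth, tol=0.3):
    """Split the first target area into the gazed object's block and a
    background block, using each pixel's distance to the reference plane.
    depth_map: per-pixel depth in metres (an assumed input).
    target_mask: boolean mask of the first target area.
    object_depth: depth of the reference plane the gazed object lies in."""
    near_plane = np.abs(depth_map - object_depth) <= tol
    block_object = target_mask & near_plane       # e.g. region 1 in fig. 3C
    block_backdrop = target_mask & ~near_plane    # e.g. region 2 in fig. 3C
    block_rest = ~target_mask                     # e.g. region 3 in fig. 3C
    return block_object, block_backdrop, block_rest
```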
In this example, based on the object information corresponding to the at least one gaze point, the electronic device separates the gazed object from non-gazed content in the first target area according to distance from the reference plane, further improving the accuracy with which the first target area is partitioned.
In one possible example, the performing super-resolution processing on the plurality of region blocks and outputting the processed target frame image includes:
determining super-resolution processing levels of the plurality of region blocks according to the position information of the at least one gaze point;
determining a super-resolution processing parameter corresponding to each region block according to its super-resolution processing level;
and performing super-resolution processing on each region block with its super-resolution processing parameter, and outputting the processed target frame image.
The super-resolution processing level of the first target area is higher than that of the other areas in the target frame image, and areas containing more gaze points within the first target area are assigned higher levels. Different super-resolution processing levels correspond to different super-resolution processing parameters, and a higher level yields a higher resolution in the processing result. For example, if the processing level of a face area A gazed at by the user is higher than that of a background area B, then after super-resolution processing the resolution of area A in the target frame image is higher than that of area B.
The electronic device may determine the super-resolution processing parameter corresponding to each region block by maintaining a mapping relationship between super-resolution processing levels and super-resolution processing parameters, and querying this mapping with a block's level to obtain the corresponding parameter.
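Such a mapping can be as simple as a lookup table keyed by processing level. The concrete parameters below (number of low-resolution input frames and network depth) and the level-selection policy are placeholders, since the embodiment does not fix them:

```python
# Assumed mapping from super-resolution processing level to parameters.
# Higher levels spend more input frames and a deeper model on the block.
SR_PARAMS = {
    3: {"num_lr_frames": 5, "conv_layers": 16},   # gazed object
    2: {"num_lr_frames": 3, "conv_layers": 8},    # rest of first target area
    1: {"num_lr_frames": 1, "conv_layers": 4},    # everything else
}

def params_for_block(gaze_counts, block_id, in_target_area):
    """Pick a processing level from how many gaze points fall in the block,
    then look up its parameters (illustrative policy)."""
    if gaze_counts.get(block_id, 0) > 0:
        level = 3          # the block the user is actually gazing at
    elif in_target_area:
        level = 2          # inside the first target area, but not gazed at
    else:
        level = 1          # outside the first target area
    return level, SR_PARAMS[level]
```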
In this example, the electronic device assigns different super-resolution processing levels to the different region blocks and processes the plurality of region blocks accordingly, which helps improve the super-resolution effect of the area the user focuses on while preserving the real-time performance of video super-resolution processing.
In this possible example, the determining the super-resolution processing parameter corresponding to each region block according to its super-resolution processing level includes:
determining a first super-resolution processing parameter corresponding to a first region block according to a first super-resolution processing level, and determining a second super-resolution processing parameter corresponding to a second region block according to a second super-resolution processing level;
and when a shared area exists between the first region block and the second region block, determining a third super-resolution processing parameter corresponding to the shared area, wherein the third super-resolution processing parameter lies between the first super-resolution processing parameter and the second super-resolution processing parameter.
For example, the shared area between the first region block and the second region block may be the junction between region 1 and region 2 shown in fig. 3B; an intermediate, third resolution is applied at the junction so that the image transitions gradually between the first region block and the second region block.
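One way to realize an intermediate parameter at the junction is to feather the two processed results across a band around the boundary; the band width and the linear blend profile below are assumptions:

```python
import cv2
import numpy as np

def blend_at_junction(up_a, up_b, mask_a, band=16):
    """Blend two super-resolved results across the junction of their
    region blocks so the resolution transitions smoothly.
    up_a, up_b: the frame processed with the first and second parameters.
    mask_a: uint8 mask (255 inside block A) at the output resolution."""
    # Distance into block A from the boundary, clipped to the band width,
    # gives a weight ramping from 0 at the junction to 1 deep inside A.
    dist = cv2.distanceTransform(mask_a, cv2.DIST_L2, 3)
    w = np.clip(dist / band, 0.0, 1.0)[..., None]    # (H, W, 1)
    return (w * up_a + (1.0 - w) * up_b).astype(up_a.dtype)
```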
In this example, the electronic device sets the third super-resolution processing parameter of the shared area between different region blocks to a value between the super-resolution processing parameters of the two blocks, so that the resolution transitions smoothly between region blocks, which helps improve the visual effect of the processed image.
Referring to fig. 4, fig. 4 is a flowchart illustrating another super-resolution processing method according to an embodiment of the present application, where the super-resolution processing method may be applied to an electronic device. As shown in the figure, the super-resolution processing method includes the following operations:
S401, the electronic device determines, by an eye tracking technique, at least one gaze point of the user on the target frame image.
S402, the electronic device connects every two of the at least one gaze point to obtain at least one straight line.
S403, the electronic device determines a first target area on the target frame image according to the area formed by the at least one straight line.
S404, the electronic device determines corresponding object information according to the at least one gaze point.
S405, the electronic device determines the reference plane in which the object information is located.
S406, the electronic device acquires first relative distance information between the reference plane and the information, other than the object information, in the first image feature information of the first target area.
S407, the electronic device segments the first target area according to the first relative distance information to obtain a plurality of region blocks.
S408, the electronic device performs super-resolution processing on the plurality of region blocks and outputs the processed target frame image.
It can be seen that, in this embodiment of the present application, an electronic device determines, through an eye tracking technology, at least one gaze point of a user on a target frame image, and determines a first target area on the target frame image according to the at least one gaze point, then, image segmentation is performed on the target frame image through the at least one gaze point and first image feature information in the first target area, so as to obtain a plurality of area blocks, then, super-resolution processing is performed on the plurality of area blocks, and a processed target frame image is output. Therefore, the electronic device determines the first target area focused by the user through the eyeball tracking technology, and performs segmentation processing on the target frame image according to the first target area so as to highlight the super-resolution effect of the first target area on the target frame image, and meanwhile, the super-resolution processing on other areas can be weakened, so that the super-resolution processing efficiency of the target frame image is improved.
In addition, the electronic device determines the first target area from the area enclosed by the lines connecting the at least one gaze point, rather than simply taking a regular area around the at least one gaze point as the first target area, which helps improve the accuracy of the first target area and better match what the user actually focuses on.
In addition, the electronic device separates the gazed object from the non-gazed content in the first target area according to the object information corresponding to the at least one gaze point and the distance to its reference plane, which further improves the segmentation accuracy of the first target area.
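The plane-distance segmentation of S404 to S407 could be sketched as below, assuming a per-pixel depth map is available for the target frame image; the depth source, the tolerance threshold, and the two-way split are illustrative assumptions, as the embodiment does not fix how the relative distance information is obtained.

```python
import numpy as np

def segment_by_plane_distance(depth, gaze_px, area_mask, tol=0.2):
    """Sketch of S404-S407: treat the depth at the gazed pixel as the
    reference plane, then split the first target area into a near-plane
    block (the gazed object) and a far block (the non-gazed content)."""
    ref = depth[gaze_px[1], gaze_px[0]]      # S405: depth of the reference plane
    rel = np.abs(depth - ref)                # S406: first relative distance information
    inside = area_mask > 0
    gazed_block = inside & (rel <= tol)      # S407: block on or near the plane
    other_block = inside & (rel > tol)       # block away from the plane
    return gazed_block, other_block
```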
Referring to fig. 5, fig. 5 is a flowchart of another super-resolution processing method according to an embodiment of the present application, where the super-resolution processing method may be applied to an electronic device. As shown in the figure, the super-resolution processing method includes the following operations:
S501, the electronic device determines, through an eye tracking technique, at least one gaze point of the user on the target frame image.
S502, the electronic device acquires second image feature information of a second target area in a history frame image.
S503, the electronic device determines a first reference area on the target frame image according to the second image feature information.
S504, the electronic device judges whether the at least one gaze point is within the first reference area.
S505, when it is judged that the at least one gaze point includes a target gaze point that is not within the first reference area, the electronic device determines a second reference area according to the target gaze point.
S506, the electronic device takes the first reference area and the second reference area as the first target area.
S507, the electronic device determines corresponding object information according to the at least one gaze point.
S508, the electronic device determines the reference plane in which the object information is located.
S509, the electronic device acquires first relative distance information between the reference plane and the information, other than the object information, in the first image feature information of the first target area.
S510, the electronic device segments the first target area according to the first relative distance information, and segments the target frame image into the first target area and the remaining area, obtaining a plurality of area blocks.
S511, the electronic device determines super-resolution processing levels of the plurality of area blocks according to the position information of the at least one gaze point.
S512, the electronic device determines the super-resolution processing parameter corresponding to each area block according to its super-resolution processing level.
S513, the electronic device performs super-resolution processing on each area block with the corresponding super-resolution processing parameter, and outputs the processed target frame image.
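A minimal sketch of S511 to S513 is given below for illustration; the distance-based level assignment, the level-to-scale mapping, and the super_resolve callback are assumptions of this sketch, since the embodiment leaves the concrete grading rule and the super-resolution model open.

```python
import numpy as np

# Illustrative mapping: a lower level means closer to a gaze point and stronger upscaling.
LEVEL_TO_SCALE = {0: 4.0, 1: 2.0, 2: 1.0}

def level_for_block(block_center, gaze_points, near=150, mid=400):
    """S511: grade a block by the distance from its center to the nearest gaze point."""
    d = min(np.hypot(block_center[0] - gx, block_center[1] - gy)
            for gx, gy in gaze_points)
    return 0 if d < near else 1 if d < mid else 2

def process_blocks(blocks, gaze_points, super_resolve):
    """S512-S513: pick a parameter per block and apply super-resolution;
    super_resolve(pixels, scale) stands in for any concrete SR model."""
    results = []
    for block in blocks:                      # block: {"center": (x, y), "pixels": ndarray}
        level = level_for_block(block["center"], gaze_points)
        scale = LEVEL_TO_SCALE[level]         # S512: level -> processing parameter
        results.append(super_resolve(block["pixels"], scale))
    return results
```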
It can be seen that, in the embodiment of the present application, the electronic device determines at least one gaze point of the user on the target frame image through an eye tracking technique, determines a first target area on the target frame image according to the at least one gaze point, performs image segmentation on the target frame image through the at least one gaze point and the first image feature information in the first target area to obtain a plurality of area blocks, and then performs super-resolution processing on the plurality of area blocks and outputs the processed target frame image. In this way, the electronic device identifies, through the eye tracking technique, the first target area that the user is focusing on, and segments the target frame image according to that area, so that the super-resolution effect of the first target area is emphasized while the super-resolution processing of the other areas can be reduced, which improves the super-resolution processing efficiency for the target frame image.
In addition, when the electronic device judges that the at least one gaze point includes a target gaze point that is not within the first reference area, it takes the second reference area determined from the target gaze point, together with the first reference area, as the first target area, which helps improve the accuracy of the first target area.
In addition, the electronic device separates the gazed object from the non-gazed content in the first target area according to the object information corresponding to the at least one gaze point and the distance to its reference plane, which further improves the segmentation accuracy of the first target area.
In addition, the electronic device assigns the different area blocks in the first target area to different super-resolution processing levels and performs super-resolution processing of different degrees on the plurality of area blocks accordingly, so that the super-resolution effect of the areas the user focuses on most is improved while the overall super-resolution processing time of the video is kept under control.
The embodiment of the application provides a super-resolution processing apparatus, which may be the electronic apparatus 100. Specifically, the super-resolution processing apparatus is configured to execute the steps of the above super-resolution processing method, and may include modules corresponding to the respective steps.
The embodiments of the present application may divide the super-resolution processing apparatus into functional modules according to the above method examples. For example, each function may be assigned its own functional module, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware or as a software functional module. The division of modules in the embodiments of the present application is illustrative and is merely a division by logical function; other division manners may be used in actual implementation.
Fig. 6 shows a possible schematic structural diagram of the super-resolution processing apparatus involved in the above embodiments, in the case where the functional modules are divided according to their respective functions. As shown in fig. 6, the super-resolution processing apparatus 600 includes a determination unit 601, a segmentation unit 602, and a processing unit 603, wherein:
the determination unit 601 is configured to determine, through an eye tracking technique, at least one gaze point of a user on a target frame image, and to determine a first target area on the target frame image according to the at least one gaze point;
the segmentation unit 602 is configured to perform image segmentation on the target frame image through the at least one gaze point and the first image feature information in the first target area, to obtain a plurality of area blocks;
the processing unit 603 is configured to perform super-resolution processing on the plurality of area blocks and output the processed target frame image.
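For readers who prefer code, the unit division of fig. 6 could be sketched as the following Python skeleton; the class and method names are assumptions that merely mirror the determination unit 601, the segmentation unit 602, and the processing unit 603, and any concrete eye tracker or super-resolution model would fill in the abstract methods.

```python
from abc import ABC, abstractmethod

class SuperResolutionApparatus(ABC):
    """Sketch of apparatus 600: each abstract method mirrors one unit."""

    @abstractmethod
    def determine(self, frame):
        """Unit 601: return (gaze_points, first_target_area)."""

    @abstractmethod
    def segment(self, frame, gaze_points, first_target_area):
        """Unit 602: return a list of area blocks."""

    @abstractmethod
    def process(self, blocks):
        """Unit 603: return the super-resolved target frame image."""

    def run(self, frame):
        """End-to-end pipeline chaining the three units."""
        gaze_points, area = self.determine(frame)
        blocks = self.segment(frame, gaze_points, area)
        return self.process(blocks)
```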
For all relevant details of the steps in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules; they are not repeated here. Of course, the super-resolution processing apparatus provided in the embodiments of the present application includes, but is not limited to, the above modules; for example, it may further include a storage unit, which may be used to store the program code and data of the super-resolution processing apparatus.
In the case of using an integrated unit, a schematic structural diagram of the super-resolution processing apparatus provided in the embodiment of the present application is shown in fig. 7. In fig. 7, the super-resolution processing apparatus 700 includes a processing module 702 and a communication module 701. The processing module 702 is configured to control and manage the actions of the super-resolution processing apparatus, for example, to perform the steps performed by the determination unit 601, the segmentation unit 602, and the processing unit 603, and/or other processes of the techniques described herein. The communication module 701 is configured to support interaction between the super-resolution processing apparatus and other devices, or between internal modules of the super-resolution processing apparatus. As shown in fig. 7, the super-resolution processing apparatus may further include a storage module 703 configured to store the program code and data of the super-resolution processing apparatus, for example, the content stored in the above storage unit.
The processing module 702 may be a processor or a controller, such as a central processing unit (Central Processing Unit, CPU), a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination that implements computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 701 may be a transceiver, a radio-frequency circuit, a communication interface, or the like. The storage module 703 may be a memory.
For all relevant details of the scenarios in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules; they are not repeated here. Both the super-resolution processing apparatus 600 and the super-resolution processing apparatus 700 can perform the super-resolution processing method shown in any one of fig. 3A to fig. 5.
The present embodiment also provides a computer storage medium storing computer instructions which, when run on an electronic device, cause the electronic device to perform the above related method steps to implement the super-resolution processing method in the above embodiments.
The present embodiment also provides a computer program product which, when run on a computer, causes the computer to perform the above-described related steps to implement the super-resolution processing method in the above-described embodiments.
In addition, embodiments of the present application also provide an apparatus, which may be specifically a chip, a component, or a module, and may include a processor and a memory connected to each other; the memory is configured to store computer-executable instructions, and when the device is running, the processor may execute the computer-executable instructions stored in the memory, so that the chip executes the super-resolution processing method in each of the above method embodiments.
The electronic device, the computer storage medium, the computer program product, or the chip provided in this embodiment are used to execute the corresponding methods provided above, so that the beneficial effects thereof can be referred to the beneficial effects in the corresponding methods provided above, and will not be described herein.
It will be appreciated by those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. A super-resolution processing method, characterized by being applied to an electronic device, the method comprising:
determining at least one gaze point of a user on a target frame image by an eye tracking technique;
determining a first target area on the target frame image according to the at least one gaze point;
image segmentation is carried out on the target frame image through the at least one fixation point and the first image characteristic information in the first target area, so that a plurality of area blocks are obtained;
performing super-resolution processing on the plurality of region blocks, and outputting a processed target frame image;
the image segmentation is performed on the target frame image through the at least one gaze point and the first image characteristic information in the first target area to obtain a plurality of area blocks, including:
determining corresponding object information according to the at least one gaze point;
determining a reference plane in which the object information is located;
acquiring first relative distance information between information except the object information in the first image characteristic information and the reference plane;
and dividing the first target area according to the first relative distance information, and dividing the first target area and the target frame image to obtain a plurality of area blocks.
2. The method according to claim 1, wherein the at least one gaze point comprises at least two gaze points, the determining a first target area on the target frame image from the at least one gaze point comprising:
connecting every two points among the at least one gaze point to obtain at least one straight line;
and determining the first target area on the target frame image according to the area formed by the at least one straight line.
3. The method according to claim 1, wherein said determining a first target area on the target frame image from the at least one gaze point comprises:
acquiring second image characteristic information of a second target area in the history frame image;
determining a first reference area on the target frame image according to the second image characteristic information;
judging whether the at least one gaze point is within the first reference region;
and when it is judged that the at least one gaze point is entirely within the first reference area, determining the first reference area as the first target area.
4. The method according to claim 3, wherein after the judging whether the at least one gaze point is within the first reference area, the method further comprises:
when it is judged that the at least one gaze point includes a target gaze point that is not within the first reference area, determining a second reference area according to the target gaze point;
and taking the first reference area and the second reference area as the first target area.
5. The method according to any one of claims 1 to 4, wherein the performing super-resolution processing on the plurality of region blocks, outputting a processed target frame image, includes:
determining super-resolution processing levels of the plurality of region blocks according to the position information of the at least one gaze point;
determining super-resolution processing parameters corresponding to each region block according to the super-resolution processing level;
and performing super-resolution processing on each region block through the super-resolution processing parameters, and outputting the processed target frame image.
6. The method of claim 5, wherein determining the super-resolution processing parameter corresponding to each of the region blocks according to the super-resolution processing level comprises:
determining a first super-resolution processing parameter corresponding to the first region block according to the first super-resolution processing level, and determining a second super-resolution processing parameter corresponding to the second region block according to the second super-resolution processing level;
and when a shared area exists between the first area block and the second area block, determining a third super-resolution processing parameter corresponding to the shared area, wherein the value of the third super-resolution processing parameter is between the first super-resolution processing parameter and the second super-resolution processing parameter.
7. A super-resolution processing apparatus, characterized by being applied to an electronic device, comprising a determination unit, a division unit, and a processing unit, wherein:
the determining unit is used for determining at least one fixation point of a user on the target frame image through an eyeball tracking technology; and determining a first target area on the target frame image according to the at least one gaze point;
the segmentation unit is used for carrying out image segmentation on the target frame image through the at least one fixation point and the first image characteristic information in the first target area to obtain a plurality of area blocks;
the processing unit is used for performing super-resolution processing on the plurality of region blocks and outputting processed target frame images;
wherein the segmentation unit is used for: determining corresponding object information according to the at least one gaze point; determining a reference plane in which the object information is located; acquiring first relative distance information between information except the object information in the first image characteristic information and the reference plane; and dividing the first target area according to the first relative distance information, and dividing the first target area and the target frame image to obtain a plurality of area blocks.
8. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-6.
9. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-6.
CN202010400669.5A 2020-05-12 2020-05-12 Super-resolution processing method and device Active CN111553846B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010400669.5A CN111553846B (en) 2020-05-12 2020-05-12 Super-resolution processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010400669.5A CN111553846B (en) 2020-05-12 2020-05-12 Super-resolution processing method and device

Publications (2)

Publication Number Publication Date
CN111553846A (en) 2020-08-18
CN111553846B (en) 2023-05-26

Family

ID=72008067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010400669.5A Active CN111553846B (en) 2020-05-12 2020-05-12 Super-resolution processing method and device

Country Status (1)

Country Link
CN (1) CN111553846B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114079728A (en) * 2020-08-19 2022-02-22 Oppo广东移动通信有限公司 Shooting anti-shake method and device, electronic equipment and storage medium
WO2022047719A1 (en) * 2020-09-04 2022-03-10 华为技术有限公司 Image resolution improvement method, video generation method, and related device
CN113018854A (en) * 2021-03-05 2021-06-25 南京雷鲨信息科技有限公司 Real-time AI (Artificial Intelligence) over-scoring method, system and computer-readable storage medium for game interested target
CN114359051A (en) * 2022-01-05 2022-04-15 京东方科技集团股份有限公司 Image processing method, image processing device, image processing system, and storage medium
CN114510192B (en) * 2022-01-30 2024-04-09 Oppo广东移动通信有限公司 Image processing method and related device
CN116843708B (en) * 2023-08-30 2023-12-12 荣耀终端有限公司 Image processing method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108255299A (en) * 2018-01-10 2018-07-06 京东方科技集团股份有限公司 A kind of image processing method and device
CN108885799A (en) * 2016-03-23 2018-11-23 索尼互动娱乐股份有限公司 Information processing equipment, information processing system and information processing method
CN108919958A (en) * 2018-07-16 2018-11-30 北京七鑫易维信息技术有限公司 A kind of image transfer method, device, terminal device and storage medium
CN110288530A (en) * 2019-06-28 2019-09-27 北京金山云网络技术有限公司 A kind of pair of image carries out the processing method and processing device of super-resolution rebuilding
CN110298790A (en) * 2019-06-28 2019-10-01 北京金山云网络技术有限公司 A kind of pair of image carries out the processing method and processing device of super-resolution rebuilding
CN110428366A (en) * 2019-07-26 2019-11-08 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN111128068A (en) * 2019-11-28 2020-05-08 上海天马有机发光显示技术有限公司 Display device and display panel driving display method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8866920B2 (en) * 2008-05-20 2014-10-21 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers

Also Published As

Publication number Publication date
CN111553846A (en) 2020-08-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant