CN109559270B - Image processing method and electronic equipment


Info

Publication number: CN109559270B (earlier published as CN109559270A)
Application number: CN201811315321.5A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 陈晓晓, 陈浩, 熊石一, 周雄, 张凯
Assignee: Huawei Technologies Co Ltd
Related application: PCT/CN2019/115553 (published as WO2020093988A1)
Legal status: Active (application granted)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G06T 1/60: Memory management


Abstract

The application provides an image processing method and an electronic device. In the method, a first electronic device divides each of N frames of original images in an animation into M pixel blocks in the same manner, and writes the M pixel blocks of the first frame into a merged image. For the K-th frame, which is any one of the remaining N-1 frames, the first electronic device traverses each pixel block of that frame: a pixel block is written into the merged image only when it differs from the pixel block at the same position in the previous frame; otherwise it is not written. The first electronic device finally generates a configuration file and sends the configuration file and the merged image to a second electronic device, so that the second electronic device can restore the animation.

Description

Image processing method and electronic equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to an image processing method and an electronic device.
Background
In recent years, with the rapid development of the electronics industry and communication technology, smart devices such as mobile phones, smart speakers, and smart bracelets have been increasing, and people's lives have become more and more intelligent. Because a mobile phone is portable and can download application software with various functions from an application store, it has become an indispensable part of daily life.
As the functions provided by mobile phone operating systems become more and more abundant, guide animations are added in many scenarios to help users use these functions and improve the user experience. In the prior art, for a guide animation with a complex path and rich details, a sequence frame animation is the simplest and most direct implementation. However, while an ordinary sequence frame animation may contain from a few to dozens of frames, the guide animations in a mobile phone operating system can run to hundreds of frames. As a result, the number of files in the operating system rises rapidly and a large amount of the terminal's ROM space is occupied.
Disclosure of Invention
The application provides an image processing method and an electronic device, which reduce both the read-only memory (ROM) occupied by animations and the total number of files.
In a first aspect, an embodiment of the present application provides an image processing method applied to a first electronic device. The method includes: dividing each of the N frames of original images in an animation into M pixel blocks in the same manner;
writing the M pixel blocks of the first frame original image of the N frames of original images into a merged image;
for a K-th frame original image, where the K-th frame is any one of the remaining N-1 frames of original images, performing the following operations:
traversing a first pixel block in the K-th frame: when the first pixel block in the K-th frame differs from the pixel block at the same position in the (K-1)-th frame, writing the first pixel block into the merged image; otherwise, not writing it;
when a second pixel block in the K-th frame differs from all pixel blocks in the current merged image, writing the second pixel block into the current merged image; otherwise, not writing it;
continuing in the same way with a third pixel block in the K-th frame, until all M pixel blocks in the K-th frame have been traversed, where K is a positive integer greater than 1.
Generating a configuration file, where the configuration file includes: a first correspondence between the serial number of each pixel block in the finally generated merged image and the serial number of its original image, and a second correspondence between each pixel block in the finally generated merged image and its position information in the corresponding original image.
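As a concrete illustration of what such a configuration file could look like, the structure below sketches one possible encoding of the two correspondences. The field names, block size, and coordinates are hypothetical examples and are not prescribed by this application.

```python
# Hypothetical configuration structure (illustration only). For each original frame,
# identified by its serial number, it records which blocks of the finally generated
# merged image belong to that frame (the first correspondence) and the position of
# each such block in that frame, given as the top-left pixel coordinate of the block
# (the second correspondence).
config = {
    "block_size": 96,  # assumed square block edge length
    "frames": [
        {"frame": 1, "blocks": [{"merged_index": 0, "position": (0, 0)},
                                {"merged_index": 1, "position": (96, 0)}]},
        {"frame": 2, "blocks": [{"merged_index": 12, "position": (96, 96)}]},
    ],
}
```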
In this embodiment of the application, the first electronic device compares the pixel blocks at the same positions in adjacent frames of the sequence frame images, filters out the pixel blocks that are identical between adjacent frames, and stores only the differing pixel blocks in the merged image. The second electronic device then obtains the configuration file and the merged image from the first electronic device; compared with the original sequence frames, the merged image and the configuration file occupy less ROM, so a certain amount of ROM space can be saved.
In one possible design, after generating the configuration file, the first electronic device sends the configuration file and the finally generated merged image to the second electronic device.
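The first-aspect flow described above can be sketched as follows. This is only an illustrative sketch, not the claimed implementation: it shows the previous-frame comparison step, assumes a fixed square block size and frame dimensions that are exact multiples of it, uses the Pillow library for pixel access, and emits a configuration dictionary in the hypothetical layout shown earlier.

```python
# Illustrative sketch of the first-aspect flow (assumptions: fixed 96x96 blocks,
# frame dimensions that are exact multiples of the block size, Pillow for image I/O).
from PIL import Image

BLOCK = 96  # assumed pixel-block edge length; M = (width // BLOCK) * (height // BLOCK)

def split_into_blocks(img, block=BLOCK):
    """Divide a frame into M pixel blocks, returning (top-left position, raw bytes)."""
    w, h = img.size
    return [((x, y), img.crop((x, y, x + block, y + block)).tobytes())
            for y in range(0, h, block) for x in range(0, w, block)]

def build_merged_image(frame_paths):
    frames = [Image.open(p).convert("RGBA") for p in frame_paths]
    merged_blocks = []  # pixel data of every block written to the merged image
    config = {"block_size": BLOCK, "frames": []}
    prev = None
    for k, frame in enumerate(frames, start=1):
        blocks = split_into_blocks(frame)
        entries = []
        for idx, (pos, data) in enumerate(blocks):
            # Frame 1: write all M blocks. Frame K > 1: write a block only when it
            # differs from the block at the same position in frame K-1.
            if k == 1 or data != prev[idx][1]:
                merged_blocks.append(data)
                entries.append({"merged_index": len(merged_blocks) - 1, "position": pos})
            # Identical blocks are filtered out; the restorer reuses the previous frame.
        config["frames"].append({"frame": k, "blocks": entries})
        prev = blocks
    return merged_blocks, config
```

The merged_blocks list can then be packed into a single image (for example as a vertical strip of blocks) and stored together with the configuration dictionary.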
In a second aspect, an embodiment of the present application provides an image processing method applied to a first electronic device. The method includes: dividing each of the N frames of original images in an animation into M pixel blocks in the same manner;
writing the M pixel blocks of any one frame of the N frames of original images into a merged image;
performing a setting operation for each of the remaining N-1 frames of original images, the setting operation including:
traversing a first pixel block in a first image: when the first pixel block in the first image differs from all pixel blocks in the merged image, writing the first pixel block into the merged image; otherwise, not writing it;
when a second pixel block in the first image differs from all pixel blocks in the current merged image, writing the second pixel block into the current merged image; otherwise, not writing it;
continuing in the same way with a third pixel block in the first image, until all M pixel blocks in the first image have been traversed.
The setting operation is then performed on a second image of the N-1 frames of original images: when a fourth pixel block in the second image differs from all pixel blocks in the current merged image, writing the fourth pixel block into the current merged image; otherwise, not writing it; and continuing with a fifth pixel block in the second image until all M pixel blocks in the second image have been traversed.
The setting operation is likewise performed on the M pixel blocks of a third image of the N-1 frames of original images, and so on, until all images in the N-1 frames of original images have been traversed.
Generating a configuration file, where the configuration file includes: a correspondence between the serial number of each pixel block in the finally generated merged image and the serial number of its original image, and the position information of each pixel block in the finally generated merged image in the corresponding original image.
In this embodiment of the application, the first electronic device compares the pixel blocks of the sequence frame images with the pixel blocks already in the latest generated merged image, filters out the pixel blocks that are already present, and stores only the differing pixel blocks in the merged image. The second electronic device then obtains the configuration file and the merged image from the first electronic device; compared with the original sequence frames, the merged image and the configuration file occupy less ROM, so a certain amount of ROM space can be saved.
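A corresponding sketch of the second-aspect flow is given below; again this is an illustration under the same assumptions (Pillow, fixed block size, the hypothetical configuration layout), not the claimed implementation. Here every pixel block of each remaining frame is compared with all blocks already present in the latest merged image, so each distinct block is stored exactly once.

```python
# Illustrative sketch of the second-aspect flow. split_into_blocks and BLOCK are as
# defined in the first-aspect sketch above.
from PIL import Image

def build_merged_image_dedup(frame_paths):
    frames = [Image.open(p).convert("RGBA") for p in frame_paths]
    merged_blocks = []  # unique block pixel data, in write order
    seen = {}           # block data -> its index in merged_blocks
    config = {"block_size": BLOCK, "frames": []}
    for k, frame in enumerate(frames, start=1):
        entries = []
        for pos, data in split_into_blocks(frame):
            if data not in seen:  # differs from every block in the current merged image
                seen[data] = len(merged_blocks)
                merged_blocks.append(data)
            # Record, for every block position, which merged-image block supplies it.
            entries.append({"merged_index": seen[data], "position": pos})
        config["frames"].append({"frame": k, "blocks": entries})
    return merged_blocks, config
```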
In a third aspect, an embodiment of the present application provides an image processing method applied to a first electronic device. The method includes: dividing each of the N frames of original images in an animation into M pixel blocks in the same manner;
writing the M pixel blocks of a first image into a merged image, where the first image is any one of the N frames of original images;
performing a setting operation for each of the remaining N-1 frames of original images, the setting operation including:
traversing a first pixel block in a second image: when the first pixel block in the second image differs from all pixel blocks in the first image, writing the first pixel block into the merged image; otherwise, not writing it;
when a second pixel block in the second image differs from all pixel blocks in the first image, writing the second pixel block into the current merged image; otherwise, not writing it; and continuing in the same way with a third pixel block in the second image, until all M pixel blocks in the second image have been traversed.
The setting operation is then performed on a third image of the N-1 frames of original images: when a fourth pixel block in the third image differs from all pixel blocks in the first image and all pixel blocks in the second image, writing the fourth pixel block into the current merged image; otherwise, not writing it; and continuing with a fifth pixel block in the third image until all M pixel blocks in the third image have been traversed.
The setting operation is likewise performed on the M pixel blocks of a fourth image of the N-1 frames of original images, and so on, until all images in the N-1 frames of original images have been traversed.
Generating a configuration file, where the configuration file includes: a correspondence between the serial number of each pixel block in the finally generated merged image and the serial number of its original image, and the position information of each pixel block in the finally generated merged image in the corresponding original image.
In this embodiment of the application, the first electronic device compares each pixel block of the sequence frame images with all pixel blocks of the images already traversed, filters out the identical pixel blocks, and stores only the differing pixel blocks in the merged image. The second electronic device then obtains the configuration file and the merged image from the first electronic device; compared with the original sequence frames, the merged image and the configuration file occupy less ROM, so a certain amount of ROM space can be saved.
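The third-aspect flow differs from the second mainly in the comparison target: a block is compared with all pixel blocks of the original frames traversed so far rather than with the merged image itself. A minimal sketch under the same assumptions as the earlier examples:

```python
# Illustrative sketch of the third-aspect flow. split_into_blocks and BLOCK are as
# defined in the first-aspect sketch above.
from PIL import Image

def build_merged_image_vs_traversed(frame_paths):
    frames = [Image.open(p).convert("RGBA") for p in frame_paths]
    merged_blocks = []
    traversed = set()  # pixel data of all blocks of already-traversed frames
    config = {"block_size": BLOCK, "frames": []}
    for k, frame in enumerate(frames, start=1):
        blocks = split_into_blocks(frame)
        entries = []
        for pos, data in blocks:
            # Write the block only if no traversed frame contains an identical block.
            if k == 1 or data not in traversed:
                merged_blocks.append(data)
                entries.append({"merged_index": len(merged_blocks) - 1, "position": pos})
            # A fuller implementation would also record, for a filtered block, the index
            # of the matching block already in the merged image, for use when restoring.
        traversed.update(data for _, data in blocks)
        config["frames"].append({"frame": k, "blocks": entries})
    return merged_blocks, config
```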
In a fourth aspect, an embodiment of the present application provides an image processing method, where the method is applied to a second electronic device, and the method includes:
obtaining a configuration file and a finally generated merged image from a first electronic device, where the configuration file includes: a first correspondence between the serial number of each pixel block in the finally generated merged image and the serial number of its original image, and a second correspondence between each pixel block in the finally generated merged image and its position information in the corresponding original image;
determining, from the finally generated merged image according to the first correspondence, N groups of M target pixel blocks corresponding to the serial numbers of the N frames of original images;
determining, according to the second correspondence, the position information of each group of M target pixel blocks in the corresponding original image;
and arranging and combining each group of M target pixel blocks according to the position information to generate the N frames of original images, and restoring the animation from the N frames of original images.
In this embodiment of the application, the second electronic device can restore the animation from the configuration file and the merged image; because the merged image and the configuration file occupy less ROM than the original sequence frames, a certain amount of ROM space can be saved.
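For the restoring side described in the fourth aspect, the sketch below rebuilds the N frames from the merged image and the hypothetical configuration structure used in the earlier sketches. The vertical-strip layout of the merged image, the fall-back to the previously restored frame for positions that have no entry, and all names are assumptions for illustration only.

```python
# Illustrative sketch of the fourth-aspect restoration on the second electronic device.
from PIL import Image

def restore_frames(merged_image_path, config, frame_size):
    block = config["block_size"]
    merged = Image.open(merged_image_path).convert("RGBA")
    frames = []
    prev = None
    for entry in config["frames"]:
        # Start from the previously restored frame so positions without an entry keep
        # their previous content (matching the first-aspect filtering of unchanged blocks).
        frame = prev.copy() if prev is not None else Image.new("RGBA", frame_size)
        for b in entry["blocks"]:
            i = b["merged_index"]
            # Assumed layout: merged block i occupies rows [i*block, (i+1)*block) of a
            # one-column vertical strip.
            tile = merged.crop((0, i * block, block, (i + 1) * block))
            frame.paste(tile, tuple(b["position"]))
        frames.append(frame)
        prev = frame
    return frames  # displaying these frames in sequence restores the animation
```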
In a fifth aspect, an embodiment of the present application provides a first electronic device, which includes a processor and a memory. The memory is configured to store one or more computer programs; when the one or more computer programs stored in the memory are executed by the processor, the electronic device is enabled to implement the method of any possible design of any of the aspects described above.
In a sixth aspect, an embodiment of the present application provides a second electronic device, including a processor, a read-only memory, a random access memory, a touch panel, and a display;
the processor is configured to acquire a configuration file and a finally generated merged image from the first electronic device, where the configuration file includes: a first correspondence between the serial number of each pixel block in the finally generated merged image and the serial number of its original image, and a second correspondence between each pixel block in the finally generated merged image and its position information in the corresponding original image;
the read-only memory is configured to store one or more computer programs, the configuration file, and the finally generated merged image;
when the touch panel detects the user's operation of playing the animation and passes it on, the processor determines, from the finally generated merged image according to the first correspondence, N groups of M target pixel blocks corresponding to the serial numbers of the N frames of original images; determines, according to the second correspondence, the position information of each group of M target pixel blocks in the corresponding original image; and arranges and combines each group of M target pixel blocks according to the position information to generate the N frames of original images and restore the animation from the N frames of original images;
the random access memory is configured to store the N frames of original images and the animation; and the display is configured to display the animation.
In a seventh aspect, the present application also provides an apparatus including a module/unit for performing the method of any one of the possible designs of any one of the above aspects. These modules/units may be implemented by hardware, or by hardware executing corresponding software.
In an eighth aspect, this embodiment further provides a computer-readable storage medium, where the computer-readable storage medium includes a computer program, and when the computer program is run on an electronic device, the electronic device is caused to perform any one of the possible design methods of the foregoing aspects.
In a ninth aspect, the present application further provides a computer program product, which when run on an electronic device, causes the electronic device to execute any one of the possible design methods of any one of the above aspects.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
Fig. 1 is a schematic diagram of an interconnection scenario according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a mobile phone according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an Android operating system according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a set of sequence frame images of a fingerprint animation according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a set of difference pixel blocks of a fingerprint animation according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a merged image according to an embodiment of the present application;
Fig. 7 is a schematic diagram of an image processing method according to an embodiment of the present application;
Fig. 8 is a schematic diagram of a sequence frame image comparison method according to an embodiment of the present application;
Fig. 9 is a schematic diagram of a sequence frame image division method according to an embodiment of the present application;
Fig. 10 is a schematic diagram of another image processing method according to an embodiment of the present application;
Fig. 11 is a schematic diagram of another image processing method according to an embodiment of the present application;
Fig. 12 is a schematic diagram of another sequence frame image comparison method according to an embodiment of the present application;
Fig. 13 is a schematic diagram of another merged image according to an embodiment of the present application;
Fig. 14 is a schematic diagram of another image processing method according to an embodiment of the present application;
Fig. 15 is a schematic diagram of another merged image according to an embodiment of the present application;
Fig. 16 is a schematic diagram of an image processing system according to an embodiment of the present application;
Fig. 17 is a schematic structural diagram of a first electronic device according to an embodiment of the present application.
Detailed Description
For ease of understanding, some of the concepts involved in the embodiments of the present application are first explained, as follows:
a sequence frame is a sequence of multiple consecutive still pictures.
Sequence frame animation, also known as frame-by-frame animation, is a technique that achieves an animation effect by rapidly switching between multiple consecutive still pictures.
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings. In the description of the embodiments of the present application, the terms "first" and "second" are used for descriptive purposes only and shall not be construed as indicating or implying relative importance or implicitly indicating the number of technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of such features. Hereinafter, the first language and the second language refer to natural languages, and the first speech and the second speech refer to sounds made by a user. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.
The image processing method provided by the embodiment of the application can be applied to the communication architecture shown in fig. 1. The communication architecture includes: a plurality of electronic devices 100 and 200. The electronic device 200 may process a plurality of frames of images in the animation into one or more frames of merged pictures, and a configuration file, and the electronic device 100 may obtain the configuration file and the merged images from the electronic device 200 and restore the animation.
The electronic device 100 and the electronic device 200 are interconnected through a communication network, which may be a local area network or a wide area network switched through a relay (relay) device. When the communication network is a local area network, the communication network may be a wifi hotspot network, a wifi P2P network, a bluetooth network, a zigbee network, or a Near Field Communication (NFC) network, for example. When the communication network is a wide area network, the communication network may be, for example, a third generation mobile communication technology (3rd-generation wireless telephone technology, 3G) network, a fourth generation mobile communication technology (4G) network, a fifth generation mobile communication technology (5th-generation mobile communication technology, 5G) network, a Public Land Mobile Network (PLMN) for future evolution, the internet, or the like. In the scenario shown in fig. 1, data, such as interactive pictures, texts, and videos, or results of processing objects such as pictures, texts, or videos by the interactive electronic device, may be interacted between different electronic devices through a communication network.
In some embodiments of the present application, the electronic device 100 and the electronic device 200 shown in FIG. 1 may be portable electronic devices that also contain other functions such as personal digital assistant and/or music player functions, for example mobile phones, tablet computers, or wearable devices with wireless communication capabilities (e.g., smart watches). Exemplary embodiments of the portable electronic device include, but are not limited to, portable electronic devices carrying the operating systems shown in Figure BDA0001856104810000051 or other operating systems. The portable electronic device may also be another portable electronic device, such as a laptop computer (laptop) with a touch-sensitive surface (e.g., a touch panel). It should also be understood that in some other embodiments of the present application, the electronic device 100 may not be a portable electronic device, but a desktop computer with a touch-sensitive surface (e.g., a touch panel).
Exemplarily, as shown in fig. 2, the following takes the electronic device 100 as an example to specifically describe the embodiment.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a USB interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display 194, a SIM card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a Neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices and the like.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via a USB interface. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna module 1, the antenna module 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the cellular network antenna may be multiplexed into a wireless local area network diversity antenna. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power Amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLAN), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves. In this embodiment, the wireless communication module 160 is configured to transmit the voice of the first user and the machine voice of the target language after the translation, or transmit the voice of the target language and the machine voice of the first language after the translation from the opposite user.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), code division multiple access (code division multiple access, CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), Long Term Evolution (LTE), LTE, BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini LED, a micro LED, a micro OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens, where N is a positive integer greater than 1. In the embodiment of the present application, the display screen 194 may be used to display a call interface, a translation interface, a language setting interface, and the like.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: MPEG1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. Further, the memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call. In the embodiment of the present application, the speaker 170A is used to play the machine voice after the translation.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal into the microphone 170C by speaking close to it. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones to implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further be provided with three, four, or more microphones to collect sound signals, reduce noise, identify sound sources, and implement a directional recording function. In the embodiment of the present application, the microphone 170C may be used to collect the voice of the user, for example, the first speech in the first language of the first user.
The headphone interface 170D is used to connect a wired headphone. The earphone interface may be a USB interface, or may be an open mobile electronic device platform (OMTP) standard interface of 3.5mm, or a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D, and then set features such as automatic unlocking upon opening the flip cover according to the detected opening or closing state of the holster or the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The method can also be used for recognizing the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, electronic device 100 may utilize range sensor 180F to range for fast focus.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, electronic device 100 implements a temperature processing strategy using the temperature detected by temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 performs a reduction in performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold to avoid the low temperature causing the electronic device 100 to shut down abnormally. In other embodiments, when the temperature is lower than a further threshold, the electronic device 100 performs boosting on the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
The touch sensor 180K is also referred to as a "touch panel" and may be disposed on the display screen 194. It is used to detect a touch operation acting on or near it. The detected touch operation may be passed to the application processor to determine the type of touch event, and a corresponding visual output is provided via the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100 at a position different from that of the display screen 194. In the embodiment of the application, the touch panel is used for receiving touch operations such as a first operation, a confirmation operation of the language setting control, a closing operation, and an exit operation.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in the headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
The keys 190 include a power-on key, a volume key, and the like. The keys may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a subscriber identity module (SIM) card. The SIM card can be brought into contact with or separated from the electronic device 100 by being inserted into or pulled out of the SIM card interface. The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface at the same time; the types of the cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards and with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, namely an embedded SIM card; the eSIM card can be embedded in the electronic device 100 and cannot be separated from it.

The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention uses an Android system with a layered architecture as an example to describe the software structure of the electronic device 100.
Fig. 3 is a block diagram of the software configuration of the electronic apparatus 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 3, the application package may include phone, camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 3, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar, and can be used to convey notification-type messages that disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify that a download is complete, to give message alerts, and the like. The notification manager may also present notifications in the top status bar of the system in the form of a chart or scroll-bar text, such as a notification of an application running in the background, or present a notification on the screen in the form of a dialog window. For example, text information may be prompted in the status bar, a prompt tone may be sounded, the electronic device may vibrate, or an indicator light may flash.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing the Android system.
The core library comprises two parts: one part is the function libraries that the Java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
In the prior art, a boot animation in a terminal system is usually stored in a Read Only Memory (ROM) in the form of sequence frame images, and a typical animation may include several or even tens of frames. Assuming that each frame image has the same size of M kb and the number of frames is N, the whole animation occupies (M x N) kb of ROM. Because the pixels of most areas of two adjacent frames in most animations are the same, the proportion of differing pixels is very small. If the identical pixel portions in adjacent frames are removed, and assuming that only about one tenth of each later frame differs from the previous frame, the total size can theoretically be optimized to (M + (M/10) x (N-1)) kb, ignoring the bytes occupied by the image encoding header. Therefore, the embodiment of the present application provides an image processing method in which a server side preprocesses an animation into one or more merged images and a configuration file, and a terminal side obtains the one or more merged images and the configuration file from the server side and then restores the animation. The ROM occupied by the one or more merged images and the configuration file is significantly smaller than that occupied by the sequence frame images, so ROM space can be saved to a certain extent.
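As a rough illustration of the estimate above, with assumed example values (a 100 kb frame, ten frames, and roughly one tenth of each later frame differing from its predecessor), the saving can be checked as follows:

```python
# Illustrative back-of-the-envelope check; the concrete numbers are assumptions.
frame_size_kb = 100     # M: size of one frame image
frame_count = 10        # N: number of frames in the animation
differing_ratio = 0.1   # assumed share of each later frame that differs from the previous one

naive_total = frame_size_kb * frame_count
optimized_total = frame_size_kb + frame_size_kb * differing_ratio * (frame_count - 1)

print(naive_total)      # 1000 kb when every frame is stored in full
print(optimized_total)  # 190 kb when only differing pixel blocks are kept
```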
For example, when a user enters the settings menu of a mobile phone and adds a new fingerprint, the user presses the fingerprint key once, and a fingerprint animation is correspondingly played on the mobile phone interface. The fingerprint animation is composed of multiple sequence frame images; as shown in fig. 4, it includes ten consecutive frames. The implementation in the prior art usually stores these ten frames of the fingerprint animation in the ROM, which occupies a relatively large amount of memory. In this embodiment of the application, the first electronic device may process the ten frames of the fingerprint animation in advance to generate one or more merged images and a configuration file, and then send the merged images and the configuration file to the second electronic device, so that the second electronic device restores the fingerprint animation. For example, the first electronic device may cut each of the ten frames into a plurality of pixel blocks in the same manner, and then sequentially compare whether the pixel blocks at the same positions in a frame and in the adjacent previous frame differ. As shown in fig. 5, fig. 5a shows the differing pixel blocks (the pixel blocks in the boxes) determined after the second frame of the fingerprint animation is compared with the first frame, fig. 5b shows the differing pixel blocks determined after the third frame is compared with the second frame, and so on, until fig. 5c shows the differing pixel blocks determined after the ninth frame is compared with the eighth frame, and fig. 5d shows the differing pixel blocks determined after the tenth frame is compared with the ninth frame.
Further, the first electronic device writes the pixel blocks in the determined boxes into a merged image that already contains all the pixel blocks of the first frame image. In this way, a merged image is finally generated that contains all the pixel blocks of the first frame image and all the boxed pixel blocks, as shown in fig. 6, while the configuration file records which frames each pixel block of the merged image belongs to and at which positions of those frames, so that the second electronic device can restore the fingerprint animation according to the configuration file and the merged image.
The image processing method provided by the embodiment of the present application will be described in detail below with reference to the accompanying drawings and application scenarios. The following embodiments may be implemented in the electronic device 100 having the above-described hardware structure.
Referring to fig. 7, a flow of a first image processing method provided in an embodiment of the present application is exemplarily shown, and the method is executed by a first electronic device. The first electronic device may be a mobile phone, a tablet computer, a personal computer, or the like.
In step 201, the first electronic device divides N frames of original images in the animation into M pixel blocks in the same manner.
Illustratively, as shown in fig. 8, the first electronic device divides the first frame image, the second frame image, and the third frame image of the animation into 24 pixel blocks in the same manner, where the positions of the pixel blocks in different frame images are the same. For example, the coordinate position of pixel block A of the first frame is the same as the coordinate position of pixel block a2 of the second frame and the coordinate position of pixel block a3 of the third frame, the coordinate position of pixel block B of the first frame is the same as the coordinate position of pixel block b2 of the second frame and the coordinate position of pixel block b3 of the third frame, and so on. Equivalently, the index value of pixel block A of the first frame is the same as the index value of pixel block a2 of the second frame and the index value of pixel block a3 of the third frame.
Step 202, the first electronic device writes M pixel blocks of a first frame original image in the N frame original images into a merged image. Wherein the M pixel blocks are all pixel blocks of the original image of the first frame.
For example, 24 pixel blocks in the first frame image in fig. 8 are written into the merged image.
Step 203, the first electronic device executes the following operations for the K-th frame original image, which is any one of the rest N-1 frame original images:
and traversing a first pixel block in the K-th frame original image, writing the first pixel block into the merged image when the first pixel block in the K-th frame original image is different from the pixel block at the same position in the K-1-th frame original image, and otherwise, not writing the first pixel block into the merged image.
And when a second pixel block in the K-th frame original image is different from all pixel blocks in the current latest combined image, writing the second pixel block into the current latest combined image, otherwise, not writing the second pixel block into the current latest combined image.
Returning to execute the traversal of the third pixel block in the K frame original image until all the M pixel blocks in the K frame original image are traversed; wherein M, N and K are positive integers greater than 1.
Taking fig. 8 as an example, the first electronic device first traverses the 24 pixel blocks a2 to x2 in the second frame image in sequence. Pixel block a2 is the same as pixel block A of the first frame image; pixel block b2 is different from pixel block B of the first frame image, so the first electronic device writes pixel block b2 into the merged image; pixel block c2 is the same as pixel block C of the first frame image; pixel block d2 is the same as pixel block D of the first frame image; and so on, until pixel block x2 is the same as pixel block X of the first frame image. In the end the first electronic device writes b2, g2, and i2 of the second frame image into the merged image. Then the first electronic device traverses the 24 pixel blocks a3 to x3 in the third frame image, where pixel block a3 is the same as pixel block a2 of the second frame image; pixel block b3 is different from pixel block b2 of the second frame image, so the first electronic device writes pixel block b3 into the merged image; pixel block c3 is the same as pixel block c2 of the second frame image; pixel block d3 is the same as pixel block d2 of the second frame image; and so on, until pixel block x3 is the same as pixel block x2 of the second frame image. In the end the first electronic device writes b3, g3, i3, j3, o3, and r3 of the third frame image into the merged image. The merged image finally generated, as shown in fig. 8, therefore contains pixel blocks A to X of the first frame image, b2, g2, and i2 of the second frame image, and b3, g3, i3, j3, o3, and r3 of the third frame image.
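The comparison loop of steps 201 to 203 can be sketched as follows; the NumPy representation, the helper names, and the in-memory layout of the configuration entries are illustrative assumptions, not the exact implementation of the embodiment:

```python
from typing import List
import numpy as np

def build_merged_image_prev_frame(frames: List[np.ndarray], block_h: int, block_w: int):
    """First method (steps 201-203): keep all blocks of frame 0, and for every
    later frame keep only the blocks that differ from the block at the same
    position in the previous frame."""
    merged_blocks = []   # pixel blocks written to the merged image, in writing order
    config = []          # entries: (frame index, row, col, merged block index)
    h, w = frames[0].shape[:2]
    rows, cols = h // block_h, w // block_w

    def block(img, r, c):
        return img[r * block_h:(r + 1) * block_h, c * block_w:(c + 1) * block_w]

    for k, frame in enumerate(frames):
        for r in range(rows):
            for c in range(cols):
                cur = block(frame, r, c)
                if k == 0 or not np.array_equal(cur, block(frames[k - 1], r, c)):
                    merged_blocks.append(cur)          # differing block: write it
                    idx = len(merged_blocks) - 1
                else:
                    # unchanged block: reuse the merged block recorded for the
                    # previous frame at the same position
                    idx = next(i for f, rr, cc, i in reversed(config)
                               if f == k - 1 and rr == r and cc == c)
                config.append((k, r, c, idx))
    return merged_blocks, config
```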
It should be noted that the N frames of original images in the embodiment of the present application may be images of various types, such as RGB images, YUV images, RGBA images, or YUVA images. An RGB image includes a red R component, a green G component, and a blue B component. An RGBA image includes a red R component, a green G component, a blue B component, and a transparency A component. A YUV image includes a luminance Y component, a color difference U component, and a color difference V component. A YUVA image includes a luminance Y component, a color difference U component, a color difference V component, and a transparency A component. Each component represents the data value (also referred to as a component value) of a certain dimension of a pixel, and these components together constitute the pixel value. In this embodiment of the application, the first electronic device therefore compares the pixel values of different pixel blocks; if the values of any one or more components differ, the pixel blocks are different.
Step 204, the first electronic device generates a configuration file, where the configuration file includes: the corresponding relation between the serial number of each pixel block in the finally generated combined image and the serial number of the original image, and the position information of each pixel block in the finally generated combined image in the corresponding original image.
Illustratively, the first electronic device assigns an index value to each pixel block position. In fig. 8 the index values run from top left to bottom right in the order (1,1), (1,2) to (1,4), (2,1), (2,2) to (2,4), up to the last one (6,4). For example, the index value of the position of pixel block b2 is (2,2), the index value of the position of pixel block g2 is (2,3), and the index value of the position of pixel block i2 is (3,1). The configuration file finally generated by the first electronic device therefore includes the correspondence between each pixel block in the merged image of fig. 8 and the first, second, and third frame images, together with the index value of each pixel block in the image it corresponds to. Specifically, the configuration file records that pixel block A in the merged image of fig. 8 corresponds to the first, second, and third frame images, with the index value of pixel block A in the first frame image, the index value of pixel block a2 in the second frame image, and the index value of pixel block a3 in the third frame image; that pixel block B in the merged image corresponds to the first frame image, with the index value of pixel block B in the first frame image; and so on, until pixel block r3 in the merged image corresponds to the third frame image, with the index value of pixel block r3 in the third frame image.
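For illustration only, such a configuration could be laid out as follows; the field names, the frame resolution, and the block numbering are assumptions and are not mandated by the embodiment:

```python
import json

# Hypothetical configuration content for the fig. 8 example (illustrative field names).
config = {
    "width": 400, "height": 600,              # assumed frame resolution
    "block_width": 100, "block_height": 100,  # assumed pixel block size
    "frame_count": 3,
    "blocks": [
        # merged block 1 (pixel block A): used by frames 1, 2, 3 at index (1, 1)
        {"merged_index": 1, "frames": [1, 2, 3],
         "positions": [[1, 1], [1, 1], [1, 1]]},
        # merged block 25 (pixel block b2): used only by frame 2 at index (2, 2)
        {"merged_index": 25, "frames": [2], "positions": [[2, 2]]},
    ],
}
print(json.dumps(config, indent=2))
```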
In a possible design, when the first electronic device is a personal computer, the first electronic device may directly send the finally generated combined image and configuration file to the second electronic device, or when receiving a call request of the second electronic device, send the finally generated combined image and configuration file to the second electronic device, so that the animation is restored by the second electronic device. When the first electronic device is a mobile phone or a tablet computer, the first electronic device can process the animation shared with the second electronic device into a combined image and a configuration file according to the method, and send the finally generated combined image and configuration file to the second electronic device, so that the ROM space of the second electronic device is saved.
In step 201, the pixel blocks generated by the division may be rectangular blocks, and the pixel block size may be adjusted as needed for different animations; that is, the pixel block size of the first electronic device should be dynamically configurable, so that by dynamically adjusting the pixel block size used to divide the sequence frame images, the overall benefit of processing the sequence frames can be optimized. Generally, the smaller the pixel block, the higher the comparison accuracy; but the smaller the pixel block, the more pixel blocks the whole image is divided into, the larger the amount of configuration information describing the correspondences, the larger the configuration file, and the lower the overall benefit. Therefore, the first electronic device needs to ensure accuracy while minimizing the amount of configuration information describing the correspondences. If the amount of configuration information describing the correspondences is referred to as the "cost", the factors affecting the cost include the resolution of the sequence frame images, the similarity between the sequence frames, the number of frames, and the like; calculating with these parameters, an optimal value of the pixel block size can be found. The configuration file also includes information such as the width and height of the image, the pixel block size, and the number of frames of the animation.
In addition, the M pixel blocks in one frame image may be the same size or different sizes. Assuming that the resolution of the sequence frame image is W x H and each frame is divided into N pixel blocks in each direction, the block width in the horizontal direction is ω1 = W/N (rounded down if not divisible) and the block height in the vertical direction is ω2 = H/N (rounded down if not divisible). When the result of W/N or H/N is not an integer, the division rule may lead to ω1 x N < W (or ω2 x N < H); that is, after the pixel blocks are rearranged, the total width is smaller than the sequence frame image width (or the total height is smaller than the sequence frame image height). Referring to fig. 9, fig. 9a shows a case in which the sequence frame region cannot be divided evenly: neither the width nor the height in fig. 9a is an integer multiple of the block size. The remaining regions are aligned with the right boundary, expanded from right to left, and divided into a series of pixel blocks as basic units, such as the rectangular regions shown on the right side of fig. 9b.
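A minimal sketch of this division, under the assumption that each frame is cut into an n-by-n grid and that the remainder strips are covered by right- and bottom-aligned blocks as in fig. 9b:

```python
def block_grid(width: int, height: int, n: int):
    """Split a width x height frame into an n-by-n grid of pixel blocks.

    Block sizes are rounded down; when width or height is not divisible by n,
    the leftover strip on the right (or bottom) is covered by extra blocks
    aligned with the right (or bottom) boundary."""
    w1, w2 = width // n, height // n          # block width and height, rounded down
    rects = []                                # each entry is (x, y, block_w, block_h)
    for r in range(n):
        for c in range(n):
            rects.append((c * w1, r * w2, w1, w2))
    if n * w1 < width:                        # right-hand remainder strip
        for r in range(n):
            rects.append((width - w1, r * w2, w1, w2))
    if n * w2 < height:                       # bottom remainder strip
        for c in range(n):
            rects.append((c * w1, height - w2, w1, w2))
    # The bottom-right corner region would be handled analogously; it is
    # omitted here to keep the sketch short.
    return rects
```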
It should be noted that the configuration file generated in step 204 of the embodiment of the present application is generally in JSON format. The configuration file may also use XML or other formats, and developers may design a format of their own as long as the configuration file remains sufficiently compact. The configuration file and the merged image may be sent to the second electronic device and stored on the second electronic device, or they may be stored on the first electronic device; in the latter case, when the second electronic device needs to play the animation, it sends a request to the first electronic device through the network and obtains the configuration file and the finally generated merged image from the first electronic device. By comparison, storing the configuration file and the merged image on the first electronic device minimizes the ROM space occupied on the second electronic device.
In a possible design, when the number of original images of the animation is large, or the similarity between the sequence frames is too low, the merged image generated by the first electronic device is large, which directly results in that when the second electronic device restores the animation, the time consumed for loading the merged image for the first time is large, and the animation playing effect is affected. Therefore, the first electronic device can generate a plurality of frames of merged images, each frame of merged images comprises a part of pixel blocks, so that the second electronic device can complete the initial animation playing only by loading part of the merged images when the animation is initially played, and the second electronic device can continuously load the other subsequent frames of merged images in the animation playing process. As the ROM space occupied by the single combined image becomes smaller, the time consumed by the second electronic equipment for loading the memory is reduced, and therefore the animation loading efficiency is improved.
Referring to fig. 10, a flow of a second image processing method provided in an embodiment of the present application is exemplarily shown, and the method is executed by the second electronic device side.
Step 301, obtaining a configuration file and a finally generated combined image from a first electronic device, where the configuration file includes: the method comprises a first corresponding relation between the serial number of each pixel block in the finally generated combined image and the serial number of the original image, and a second corresponding relation between the position information of each pixel block in the finally generated combined image and the corresponding original image.
Step 302, according to the first corresponding relationship, determining N groups of M target pixel blocks corresponding to the serial numbers of the N frames of original images from the finally generated merged image.
Step 303, determining the position information of each group of M target pixel blocks in the corresponding original image according to the second corresponding relationship.
And 304, arranging and combining the M target pixel blocks in each group according to the position information to generate the N frames of original images, and restoring animation according to the N frames of original images.
Illustratively, assume that the second electronic device first acquires the merged image in fig. 8 and the configuration file corresponding to the merged image from the first electronic device. The second electronic device then determines, according to the first correspondence in the configuration file, that the pixel blocks corresponding to the first frame image are pixel blocks A to X in the merged image, and that the pixel blocks corresponding to the second frame image are pixel block A, pixel block b2, pixel blocks C to F, pixel block g2, pixel block H, pixel block i2, and pixel blocks J to X. By analogy, the pixel blocks corresponding to the third frame image are determined to be pixel block A, pixel block b3, pixel blocks C to F, pixel block g3, pixel block H, pixel block i3, pixel block j3, pixel blocks K to N, pixel block o3, pixel blocks P and Q, pixel block r3, and pixel blocks S to X. Further, the second electronic device determines, according to the second correspondence in the configuration file, the position information in the first frame image of the pixel blocks corresponding to the first frame image, for example the index values (1,1), (1,2) to (1,4), (2,1), (2,2) to (2,4) in sequence; it likewise determines the position information of the 24 pixel blocks corresponding to the second frame image in the second frame image, and so on, until the position information of the 24 pixel blocks corresponding to the third frame image in the third frame image is determined.
Furthermore, the second electronic device arranges and combines these pixel blocks according to the position information to generate the first frame image, the second frame image, and the third frame image, and finally restores the animation from the three frame images.
For another example, the second electronic device may restore the images of the 1 st frame to the 10 th frame shown in fig. 4 according to the merged image shown in fig. 6 and the corresponding relationship in the configuration file, and finally restore the fingerprint animation composed of the images of the 10 frames shown in fig. 4. The second electronic device may store the restored 10 frames of images in a Random Access Memory (RAM).
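The restoration of steps 301 to 304 can be sketched as follows, reusing the illustrative configuration layout assumed in the earlier generation sketch (one entry per frame position, pointing at a block of the merged image); the array shapes and parameter names are assumptions for illustration:

```python
import numpy as np

def restore_frames(merged_blocks, config, frame_count, rows, cols,
                   block_h, block_w, channels=4):
    """Rebuild the N original frames from the merged image blocks and the
    configuration entries (frame index, row, col, merged block index)."""
    frames = [np.zeros((rows * block_h, cols * block_w, channels), dtype=np.uint8)
              for _ in range(frame_count)]
    for k, r, c, idx in config:
        # copy the referenced merged block back to its position in frame k
        frames[k][r * block_h:(r + 1) * block_h,
                  c * block_w:(c + 1) * block_w] = merged_blocks[idx]
    return frames
```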
Referring to fig. 11, a flowchart of a third image processing method provided in an embodiment of the present application is exemplarily shown, and the method is executed by a first electronic device.
Step 401, dividing the N frames of original images in the animation into M pixel blocks in the same manner.
Illustratively, as shown in fig. 12, the first electronic device divides the first frame image, the second frame image, the third frame image, and the fourth frame image in the animation into 24 pixel blocks in the same manner.
Step 402, writing M pixel blocks of any one frame of original image in the N frames of original images into a merged image.
For example, 24 pixel blocks in the first frame image in fig. 12 are written into the merged image.
Step 403, traversing the remaining N-1 frames of original images, and performing a setting operation on each original image in the N-1 frames of original images, where the setting operation includes:
and traversing a first pixel block in a first image, writing the first pixel block into the merged image when the first pixel block in the first image is different from all the pixel blocks in the merged image, and otherwise, not writing the first pixel block into the merged image.
And when the second pixel block in the first image is different from all the pixel blocks in the current latest combined image, writing the second pixel block into the current latest combined image, otherwise, not writing the second pixel block into the current latest combined image. And returning to execute the third pixel block in the first image in a traversing manner until all the M pixel blocks in the first image are traversed.
Returning to execute a third pixel block in a second image in the N-1 frame original image according to the setting operation, writing the third pixel block into the current latest combined image when the third pixel block in the second image is different from all pixel blocks in the current latest combined image, and otherwise, not writing the third pixel block into the current latest combined image; and returning to execute the fourth pixel block in the second image until all the M pixel blocks in the second image are traversed.
And returning and executing M pixel blocks in the third image in the N-1 frame original image according to the setting operation until all the images in the N-1 frame original image are traversed.
Taking fig. 12 as an example, the first electronic device first traverses the 24 pixel blocks a2 to x2 in the second frame image in sequence. The first electronic device determines that pixel block a2 is the same as pixel block A of the current merged image, so it does not write pixel block a2 into the merged image and records this information in the configuration file (pixel block a2 in the second frame image is the same as pixel block A of the current latest merged image). The first electronic device determines that pixel block b2 is different from all pixel blocks of the current merged image, so it writes pixel block b2 into the merged image and records in the configuration file the position information of pixel block b2 of the second frame image within the merged image; at this point the current latest merged image contains pixel block b2 in addition to all pixel blocks of the first frame image. The first electronic device determines that pixel block c2 is the same as pixel block C of the current merged image, so it does not write pixel block c2 into the merged image and records this information in the configuration file (pixel block c2 in the second frame image is the same as pixel block C of the current latest merged image), and so on, until the first electronic device determines that pixel block x2 is the same as pixel block X of the current latest merged image, does not write pixel block x2 into the merged image, and records this information in the configuration file (pixel block x2 in the second frame image is the same as pixel block X of the current latest merged image).
Further, the first electronic device sequentially traverses the 24 pixel blocks of a3 through x3 in the third frame image and the 24 pixel blocks of a4 through x4 in the fourth frame image again. The specific processing manner is similar to that of the second frame image, and is not described herein again.
In this way, as shown in fig. 13, the merged image finally generated by the first electronic device includes, in addition to all the pixel blocks of the first frame image, a pixel block b2, a pixel block g2, a pixel block i2, a pixel block g34, and a pixel block r4.
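The comparison against the current latest merged image that distinguishes this third method can be sketched as follows; as before, the NumPy representation and the configuration layout are illustrative assumptions rather than the exact implementation:

```python
import numpy as np

def build_merged_image_dedup(frames, block_h, block_w):
    """Third method: all blocks of the first frame are written; a block of a
    later frame is written only if it differs from every block already present
    in the current latest merged image, so no block content is stored twice."""
    merged_blocks, config = [], []
    rows = frames[0].shape[0] // block_h
    cols = frames[0].shape[1] // block_w
    for k, frame in enumerate(frames):
        for r in range(rows):
            for c in range(cols):
                cur = frame[r * block_h:(r + 1) * block_h,
                            c * block_w:(c + 1) * block_w]
                if k == 0:
                    merged_blocks.append(cur)      # first frame: write everything
                    idx = len(merged_blocks) - 1
                else:
                    idx = next((i for i, b in enumerate(merged_blocks)
                                if np.array_equal(b, cur)), None)
                    if idx is None:                # differs from every merged block
                        merged_blocks.append(cur)
                        idx = len(merged_blocks) - 1
                config.append((k, r, c, idx))      # which merged block this position uses
    return merged_blocks, config
```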
Step 404, generating a configuration file, wherein the configuration file comprises: the corresponding relation between the serial number of each pixel block in the finally generated combined image and the serial number of the original image, and the position information of each pixel block in the finally generated combined image in the corresponding original image.
Illustratively, the first electronic device assigns an index value to the position of each pixel block in the finally generated merged image and in the four frames of original images. In fig. 12 the index values of the pixel blocks of the four frames run from top left to bottom right in the order (1,1), (1,2) to (1,4), (2,1), (2,2) to (2,4), up to the last one (6,4). For example, the index value of the position of pixel block b2 is (2,2), the index value of the position of pixel block g2 is (2,3), and the index value of the position of pixel block i2 is (3,1). The configuration file finally generated by the first electronic device therefore includes the correspondence between each pixel block in the finally generated merged image of fig. 13 and the first, second, third, and fourth frame images, together with the index value of each pixel block in the image it corresponds to. Specifically, the configuration file records that pixel block A in the merged image of fig. 13 corresponds to the first, second, third, and fourth frame images, with the index value of pixel block A in the first frame image, of pixel block a2 in the second frame image, of pixel block a3 in the third frame image, and of pixel block a4 in the fourth frame image; that pixel block B in the merged image corresponds to the frame images whose pixel block at that position is the same as it, together with the corresponding index values; and so on, until the last correspondence, for example that pixel block X in the merged image corresponds to the first, second, third, and fourth frame images, with the index value of pixel block X in the first frame image, of pixel block x2 in the second frame image, of pixel block x3 in the third frame image, and of pixel block x4 in the fourth frame image.
In a possible design, the first electronic device may directly send the finally generated combined image and configuration file to the second electronic device, or may send the finally generated combined image and configuration file to the second electronic device when receiving a call request of the second electronic device, so that the animation is restored by the second electronic device.
In addition, the first electronic device may select all pixel blocks of any one frame image from the four frame images to be written into the merged image, and the first electronic device may select to compare the pixel blocks of the frame image with all pixel blocks in the newly generated merged image, or may select to compare the pixel blocks of the frame image with all pixel blocks in the frame image that has already been compared.
It should be noted that, in step 401, the manner of dividing the generated pixel blocks and the format of the configuration file by the first electronic device are the same as the implementation of the first method, and are not described herein again.
In a possible design, when the number of original images of the animation is large, or the similarity between the sequence frames is too low, the merged image generated by the first electronic device is large, which directly results in that when the second electronic device restores the animation, the time consumed for loading the merged image for the first time is large, and the animation playing effect is affected. Therefore, the first electronic device can generate a plurality of frames of merged images, each frame of merged images comprises a part of pixel blocks, so that the second electronic device can complete the initial animation playing only by loading part of the merged images when the animation is initially played, and the second electronic device can continuously load the other subsequent frames of merged images in the animation playing process. As the ROM space occupied by the single combined image becomes smaller, the time consumed by the second electronic equipment for loading the memory is reduced, and therefore the animation loading efficiency is improved.
Similarly, after the second electronic device obtains the configuration file and the finally generated merged image from the first electronic device, the process of restoring the animation is consistent with the second image processing method shown in fig. 10, and details are not repeated here.
Referring to fig. 14, a flow of a fourth image processing method provided in the embodiment of the present application is exemplarily shown, and the method is executed by the first electronic device.
Step 501, dividing N original images in the animation into M pixel blocks according to the same mode.
Step 502, writing M pixel blocks of a first image into a merged image, where the first image is any one of the N original images.
For example, the first image is the first frame image in fig. 12, and the first electronic device writes the 24 pixel blocks in the first frame image in fig. 12 into the merged image.
Step 503, traversing the other N-1 frames of original images, and executing a setting operation for each original image in the N-1 frames of original images, where the setting operation includes:
and traversing a first pixel block in the second image, writing the first pixel block into the merged image when the first pixel block in the second image is different from all the pixel blocks in the first image, and otherwise, not writing the first pixel block into the merged image.
And when a second pixel block in the second image is different from all pixel blocks in the first image, writing the second pixel block into the current latest combined image, otherwise, not writing the second pixel block into the current latest combined image. And returning to execute the third pixel block in the second image in a traversing manner until all the M pixel blocks in the second image are traversed.
Returning to execute a fourth pixel block in a third image in the N-1 frame original image according to the setting operation, writing the fourth pixel block into the current latest combined image when the fourth pixel block in the third image is different from all pixel blocks in the first image and all pixel blocks in the second image, otherwise, not writing the fourth pixel block into the current latest combined image; and returning to execute the fifth pixel block in the third image until all the M pixel blocks in the third image are traversed.
And returning and executing M pixel blocks in the fourth image in the N-1 frame original image according to the setting operation until all the images in the N-1 frame original image are traversed.
Taking fig. 12 as an example, the first electronic device first traverses the 24 pixel blocks a2 to x2 in the second frame image in sequence. The first electronic device compares pixel block a2 with all pixel blocks of the first frame image, determines that pixel block a2 is the same as pixel block A of the first frame image, does not write pixel block a2 into the merged image, and records this information in the configuration file (pixel block a2 in the second frame image is the same as pixel block A of the first frame image). The first electronic device determines that pixel block b2 is different from all pixel blocks of the first frame image, writes pixel block b2 into the merged image, and records in the configuration file the position information of pixel block b2 of the second frame image within the merged image; at this point the current latest merged image contains pixel block b2 in addition to all pixel blocks of the first frame image. The first electronic device compares pixel block c2 with all pixel blocks of the first frame image, determines that pixel block c2 is the same as pixel block C of the first frame image, does not write pixel block c2 into the merged image, and records this information in the configuration file (pixel block c2 in the second frame image is the same as pixel block C of the first frame image), and so on, until the first electronic device determines that pixel block x2 is the same as pixel block X of the first frame image, does not write pixel block x2 into the merged image, and records this information in the configuration file (pixel block x2 in the second frame image is the same as pixel block X of the first frame image).
Further, the first electronic device traverses the 24 pixel blocks a3 to x3 in the third frame image in sequence. The first electronic device compares pixel block a3 with all pixel blocks of the first frame image and the second frame image, determines that pixel block a3 is the same as pixel block A of the first frame image, does not write pixel block a3 into the merged image, and records this information in the configuration file (pixel block a3 in the third frame image is the same as pixel block A of the first frame image). The first electronic device compares pixel block b3 with all pixel blocks of the first frame image and the second frame image, determines that pixel block b3 is the same as pixel block b2 of the second frame image, does not write pixel block b3 into the merged image, and records this information in the configuration file (pixel block b3 in the third frame image is the same as pixel block b2 of the second frame image); and so on, until the first electronic device determines that pixel block x3 is the same as pixel block X of the first frame image, does not write pixel block x3 into the merged image, and records this information in the configuration file (pixel block x3 in the third frame image is the same as pixel block X of the first frame image).
The 24 pixel blocks a4 to x4 in the fourth frame image are processed in the same way, which is not described herein again.
In this way, as shown in fig. 15, the merged image finally generated by the first electronic device includes, in addition to all the pixel blocks of the first frame image, a pixel block b2, a pixel block g2, a pixel block i2, a pixel block g3, a pixel block i3, and a pixel block r4.
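The comparison against the already-traversed original frames that distinguishes this fourth method can be sketched as follows; the representation and the configuration layout are again illustrative assumptions:

```python
import numpy as np

def build_merged_image_vs_prior_frames(frames, block_h, block_w):
    """Fourth method: a block of frame K is compared with every block of the
    already-traversed original frames (not with the merged image itself), so
    identical block content may still appear in the merged image more than once."""
    merged_blocks, config = [], []
    rows = frames[0].shape[0] // block_h
    cols = frames[0].shape[1] // block_w

    def blocks_of(frame):
        for r in range(rows):
            for c in range(cols):
                yield r, c, frame[r * block_h:(r + 1) * block_h,
                                  c * block_w:(c + 1) * block_w]

    for k, frame in enumerate(frames):
        for r, c, cur in blocks_of(frame):
            match = None
            for j in range(k):                      # frames traversed before frame k
                for rr, cc, prev in blocks_of(frames[j]):
                    if np.array_equal(prev, cur):
                        match = (j, rr, cc)         # same as block (rr, cc) of frame j
                        break
                if match is not None:
                    break
            if match is None:
                merged_blocks.append(cur)           # not found in any earlier frame
                config.append((k, r, c, len(merged_blocks) - 1))
            else:
                config.append((k, r, c, match))     # reference to the matching block
    return merged_blocks, config
```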
Step 504, the first electronic device generates a configuration file, where the configuration file includes: the correspondence between the serial number of each pixel block in the finally generated merged image and the serial number of the original image, and the position information of each pixel block in the finally generated merged image in the corresponding original image.
Illustratively, the first electronic device assigns an index value to the position of each pixel block in the finally generated merged image and in the four frames of original images. In fig. 12 the index values of the pixel blocks of the four frames run from top left to bottom right in the order (1,1), (1,2) to (1,4), (2,1), (2,2) to (2,4), up to the last one (6,4). For example, the index value of the position of pixel block b2 is (2,2), the index value of the position of pixel block g2 is (2,3), and the index value of the position of pixel block i2 is (3,1). The configuration file finally generated by the first electronic device therefore includes the correspondence between each pixel block in the finally generated merged image of fig. 15 and the first, second, third, and fourth frame images, together with the index value of each pixel block in the image it corresponds to. Specifically, the configuration file records that pixel block A in the merged image of fig. 15 corresponds to the first, second, third, and fourth frame images, with the index value of pixel block A in the first frame image, of pixel block a2 in the second frame image, of pixel block a3 in the third frame image, and of pixel block a4 in the fourth frame image; that pixel block B in the merged image corresponds to the frame images whose pixel block at that position is the same as it, together with the corresponding index values; and so on, until the last correspondence, for example that pixel block X in the merged image corresponds to the first, second, third, and fourth frame images, with the index value of pixel block X in the first frame image, of pixel block x2 in the second frame image, of pixel block x3 in the third frame image, and of pixel block x4 in the fourth frame image.
In a possible design, the first electronic device may directly send the finally generated combined image and configuration file to the second electronic device, or may send the finally generated combined image and configuration file to the second electronic device when receiving a call request of the second electronic device, so that the animation is restored by the second electronic device.
In addition, the first electronic device may select all pixel blocks of any one frame image from the four frame images to be written into the merged image, and the first electronic device may select to compare the pixel blocks of the frame image with all pixel blocks in the newly generated merged image, or may select to compare the pixel blocks of the frame image with all pixel blocks in the frame image that has already been compared.
It should be noted that, in step 501, the manner of dividing the generated pixel blocks and the format of the configuration file by the first electronic device are the same as the implementation of the first method, and are not described herein again.
Compared with the first method provided by the embodiment of the present application, this method can reduce the size of the merged image to some extent; however, compared with the method shown in fig. 11, identical pixel blocks may still exist in the merged image. The merged image generated by the method shown in fig. 11 therefore occupies the least ROM and brings the greatest space saving.
An embodiment of the present application further provides an image processing system, which is composed of a first electronic device and a second electronic device, as shown in fig. 16, where the first electronic device includes an image merging module 601, and the second electronic device includes a sequence frame restoring module 602 and an animation playing module 603.
The image merging module 601 is configured to process the sequence frame images of an animation into a merged image and a configuration file according to the first, third, or fourth image processing method provided in the embodiment of the present application.
The sequence frame restoring module 602 is configured to restore a plurality of frames of original images according to the merged image and the configuration file according to the second image processing method provided in the embodiment of the present application.
And the animation playing module 603 is configured to read multiple frames of original images output by the sequence frame restoring module 602, and encapsulate the multiple frames of original images into a sequence frame animation according to a requirement.
Therefore, the second electronic device can restore the animation only by configuring the file and combining the images, and the occupied ROM space is obviously reduced compared with the prior art that the sequence frame is directly played by polling.
The sequence frame restoring module 602 and the animation playing module 603 may execute in sequence; that is, the sequence frame restoring module 602 first restores all the sequence frames, and the animation playing module 603 then processes them with the conventional sequence frame technology. Alternatively, the sequence frame restoring module 602 may execute synchronously with the animation playing module 603: each time the sequence frame restoring module 602 restores a frame, it outputs that frame to the animation playing module 603, so that the two modules run in parallel. This avoids the problem of waiting for the animation to play because the sequence frame restoring module takes a long time, and makes the interaction more friendly.
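One way to realize this synchronous mode is a simple producer-consumer arrangement, sketched below; restore_one_frame and show_frame are placeholders standing in for the internals of the sequence frame restoring module 602 and the animation playing module 603, and the queue-based design is an assumption for illustration:

```python
import queue
import threading

def restore_worker(restore_one_frame, frame_count, out_q):
    # Sequence frame restoring module: output each frame as soon as it is rebuilt.
    for k in range(frame_count):
        out_q.put(restore_one_frame(k))
    out_q.put(None)                                  # sentinel: restoration finished

def play_animation(restore_one_frame, show_frame, frame_count):
    # Animation playing module: consume frames as they arrive instead of waiting
    # for the whole sequence to be restored first.
    q = queue.Queue(maxsize=2)                       # small buffer keeps both modules in step
    t = threading.Thread(target=restore_worker,
                         args=(restore_one_frame, frame_count, q))
    t.start()
    while True:
        frame = q.get()
        if frame is None:
            break
        show_frame(frame)
    t.join()
```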
The image processing method provided by the embodiment of the application can be applied to all scenarios in a mobile phone that involve playing sequence frame animations. Using this technology, a large amount of ROM space can be saved on the mobile phone, and the saved space makes it easier for the system to run other important functions.
An embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium includes a computer program, and when the computer program runs on an electronic device, the electronic device is caused to execute any one of the possible implementations of the image processing method above.
Embodiments of the present application further provide a computer program product, which, when running on an electronic device, enables the electronic device to execute any one of the possible implementations of the image processing method above.
In other embodiments of the present application, embodiments of the present application disclose a first electronic device, as shown in fig. 17, the first electronic device may include: a processor 701; a memory 702; and one or more computer programs 703, which may be connected via one or more communication buses 704.
The one or more computer programs 703 are stored in the memory 702 and configured to be executed by the processor 701, and the one or more computer programs 703 comprise instructions that may be used to perform the steps in the embodiments corresponding to fig. 7, fig. 11, and fig. 14. Specifically, the processor 701 is configured to perform steps 201 to 204 in fig. 7, steps 401 to 404 in fig. 11, and steps 501 to 504 in fig. 14. The merged image and the configuration file generated by the processor 701 are stored in the memory 702.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or make a contribution to the prior art, or all or part of the technical solutions may be implemented in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a first electronic device, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: flash memory, removable hard drive, read only memory, random access memory, magnetic or optical disk, and the like.
The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the embodiments of the present application should be covered by the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method applied to a first electronic device, the method comprising:
dividing N frames of original images in the animation into M pixel blocks in the same mode;
writing M pixel blocks of a first frame original image in the N frames of original images into a combined image;
aiming at a Kth frame original image, wherein the Kth frame original image is any one of the rest N-1 frame original images, the following operations are executed:
traversing a first pixel block in a K-th frame original image, writing the first pixel block into the merged image when the first pixel block in the K-th frame original image is different from a pixel block at the same position in a K-1-th frame original image, otherwise, not writing the first pixel block into the merged image;
when a second pixel block in the K-th frame original image is different from all pixel blocks in the current latest combined image, writing the second pixel block into the current latest combined image, otherwise, not writing the second pixel block into the current latest combined image;
returning to execute the traversal of the third pixel block in the K frame original image until all the M pixel blocks in the K frame original image are traversed; wherein K is a positive integer greater than 1;
generating a configuration file, the configuration file comprising: and the first corresponding relation between the serial number of each pixel block in the finally generated combined image and the serial number of the Kth frame original image, and the second corresponding relation between the position information of each pixel block in the finally generated combined image and the position information of the pixel block in the Kth frame original image.
2. The method of claim 1, further comprising:
and sending the configuration file and the finally generated combined image to second electronic equipment.
3. An image processing method applied to a first electronic device, the method comprising:
dividing each of N frames of original images in an animation into M pixel blocks in the same manner;
writing the M pixel blocks of any one frame of the N frames of original images into a combined image;
performing a setting operation for each of the remaining N-1 frames of original images, the setting operation comprising:
traversing a first pixel block in a first image of the N-1 frames of original images, and writing the first pixel block into the combined image when it differs from all pixel blocks in the combined image; otherwise, not writing the first pixel block into the combined image;
writing a second pixel block in the first image into the current combined image when it differs from all pixel blocks in the current combined image; otherwise, not writing the second pixel block into the current combined image;
returning to traverse a third pixel block in the first image, and so on, until all M pixel blocks in the first image have been traversed;
performing the setting operation on a fourth pixel block in a second image of the N-1 frames of original images, writing the fourth pixel block into the current combined image when it differs from all pixel blocks in the current combined image; otherwise, not writing the fourth pixel block into the current combined image;
returning to traverse a fifth pixel block in the second image, and so on, until all M pixel blocks in the second image have been traversed;
performing the setting operation on the M pixel blocks of a third image of the N-1 frames of original images, and so on, until all images of the N-1 frames of original images have been traversed; and
generating a configuration file, the configuration file comprising: a correspondence between the serial number of each pixel block in the finally generated combined image and the serial numbers of the N frames of original images, and the position information of each pixel block in the finally generated combined image within its corresponding original image.
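Again for illustration only: the method of claim 3 drops the positional comparison with the previous frame and keeps only the comparison against blocks already present in the combined image. A hedged sketch, reusing the hypothetical split_into_blocks helper, NumPy frames, and configuration format from the sketch following claim 1:

    def build_combined_image_dedup_only(frames, block_h, block_w):
        """Claim-3-style variant: after the first chosen frame, a pixel block is written
        to the combined image only if it differs from every block already in it."""
        combined = []
        config = {}
        for k, frame in enumerate(frames):
            for r, c, block in split_into_blocks(frame, block_h, block_w):
                if k == 0:
                    # The M blocks of the chosen first frame are always written.
                    combined.append(block)
                    config[(k, r, c)] = len(combined) - 1
                    continue
                for idx, existing in enumerate(combined):
                    if np.array_equal(block, existing):
                        config[(k, r, c)] = idx
                        break
                else:
                    combined.append(block)
                    config[(k, r, c)] = len(combined) - 1
        return combined, config

Compared with the claim-1 variant, this trades the cheap positional check against frame K-1 for a single deduplication rule, so an identical block is stored at most once regardless of where or in which frame it occurs.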
4. The method of claim 3, further comprising:
sending the configuration file and the finally generated combined image to a second electronic device.
5. An image processing method applied to a first electronic device, the method comprising:
dividing each of N frames of original images in an animation into M pixel blocks in the same manner;
writing the M pixel blocks of a first image into a combined image, the first image being any one of the N frames of original images;
performing a setting operation for each of the remaining N-1 frames of original images, the setting operation comprising:
traversing a first pixel block in a second image, and writing the first pixel block into the combined image when it differs from all pixel blocks in the first image; otherwise, not writing the first pixel block into the combined image;
writing a second pixel block in the second image into the current combined image when it differs from all pixel blocks in the first image; otherwise, not writing the second pixel block into the current combined image; returning to traverse a third pixel block in the second image, and so on, until all M pixel blocks in the second image have been traversed;
performing the setting operation on a fourth pixel block in a third image of the N-1 frames of original images, writing the fourth pixel block into the current combined image when it differs from all pixel blocks in the first image and all pixel blocks in the second image; otherwise, not writing the fourth pixel block into the current combined image; returning to traverse a fifth pixel block in the third image, and so on, until all M pixel blocks in the third image have been traversed;
performing the setting operation on the M pixel blocks of a fourth image of the N-1 frames of original images, and so on, until all images of the N-1 frames of original images have been traversed; and
generating a configuration file, the configuration file comprising: a correspondence between the serial number of each pixel block in the finally generated combined image and the serial numbers of the N frames of original images, and the position information of each pixel block in the finally generated combined image within its corresponding original image.
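For illustration only: the method of claim 5 compares each new pixel block against all pixel blocks of the previously processed original frames rather than against the combined image itself. A hedged sketch under the same hypothetical assumptions and helpers as above:

    def build_combined_image_vs_prior_frames(frames, block_h, block_w):
        """Claim-5-style variant: a pixel block of frame K is written to the combined
        image only if it differs from every pixel block of the earlier frames."""
        combined = []
        config = {}
        prior = []                     # (block, serial number) for all blocks of earlier frames
        for k, frame in enumerate(frames):
            current = []
            for r, c, block in split_into_blocks(frame, block_h, block_w):
                match = next((idx for b, idx in prior if np.array_equal(block, b)), None)
                if match is None:
                    combined.append(block)
                    match = len(combined) - 1
                config[(k, r, c)] = match
                current.append((block, match))
            prior.extend(current)      # this frame's blocks become comparands only afterwards
        return combined, config

In practice the result differs from the claim-3 variant only when a frame contains a duplicate block that no earlier frame contained, since such a block is then written more than once.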
6. The method of claim 5, further comprising:
sending the configuration file and the finally generated combined image to a second electronic device.
7. An image processing method applied to a second electronic device, the method comprising:
obtaining a configuration file and a finally generated combined image from a first electronic device, wherein the configuration file comprises: a first correspondence between the serial number of each pixel block in the finally generated combined image and the serial number of the original image, and a second correspondence between the position information of each pixel block in the finally generated combined image and the position information of that pixel block in the original image;
determining, from the finally generated combined image and according to the first correspondence, N groups of M target pixel blocks corresponding to the serial numbers of the N frames of original images;
determining, according to the second correspondence, the position information of each group of M target pixel blocks in the corresponding original image;
arranging and combining each group of M target pixel blocks according to the position information to regenerate the N frames of original images, and restoring the animation from the N frames of original images;
wherein the finally generated combined image is generated in one of the following two ways:
dividing each of N frames of original images in the animation into M pixel blocks in the same manner; writing the M pixel blocks of the first frame original image of the N frames of original images into a combined image; for a K-th frame original image, the K-th frame original image being any one of the remaining N-1 frames of original images, performing the following operations: traversing a first pixel block in the K-th frame original image, and writing the first pixel block into the combined image when it differs from the pixel block at the same position in the (K-1)-th frame original image; otherwise, not writing the first pixel block into the combined image; writing a second pixel block in the K-th frame original image into the current combined image when it differs from all pixel blocks in the current combined image; otherwise, not writing the second pixel block into the current combined image; returning to traverse a third pixel block in the K-th frame original image, and so on, until all M pixel blocks in the K-th frame original image have been traversed, wherein K is a positive integer greater than 1;
or: dividing each of N frames of original images in the animation into M pixel blocks in the same manner; writing the M pixel blocks of a first image into a combined image, the first image being any one of the N frames of original images; performing a setting operation for each of the remaining N-1 frames of original images, the setting operation comprising: traversing a first pixel block in a second image, and writing the first pixel block into the combined image when it differs from all pixel blocks in the first image; otherwise, not writing the first pixel block into the combined image; writing a second pixel block in the second image into the current combined image when it differs from all pixel blocks in the first image; otherwise, not writing the second pixel block into the current combined image; returning to traverse a third pixel block in the second image, and so on, until all M pixel blocks in the second image have been traversed; performing the setting operation on a fourth pixel block in a third image of the N-1 frames of original images, writing the fourth pixel block into the current combined image when it differs from all pixel blocks in the first image and all pixel blocks in the second image; otherwise, not writing the fourth pixel block into the current combined image; returning to traverse a fifth pixel block in the third image, and so on, until all M pixel blocks in the third image have been traversed; and performing the setting operation on the M pixel blocks of a fourth image of the N-1 frames of original images, and so on, until all images of the N-1 frames of original images have been traversed.
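For illustration only: a minimal Python sketch of the restoration recited in claim 7, assuming the combined image is available as the ordered list of written blocks and the configuration file as the (frame index, row, column) -> serial number mapping produced by the hypothetical encoder sketches above; the name restore_frames and all parameters are illustrative.

    def restore_frames(combined, config, frame_h, frame_w, block_h, block_w, n_frames, channels=3):
        """Rebuild the N original frames by copying, for every (frame, position) entry
        of the configuration, the referenced combined-image block into place."""
        frames = []
        for k in range(n_frames):
            frame = np.zeros((frame_h, frame_w, channels), dtype=np.uint8)
            for (frame_idx, r, c), serial in config.items():
                if frame_idx == k:
                    frame[r:r + block_h, c:c + block_w] = combined[serial]
            frames.append(frame)
        return frames

Playing the returned frames in order at the animation's frame rate restores the animation on the second electronic device.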
8. A first electronic device comprising one or more processors and one or more memories;
the one or more memories being coupled to the one or more processors and configured to store computer program code comprising computer instructions which, when executed by the one or more processors, cause the first electronic device to perform the image processing method of any one of claims 1 to 6.
9. A second electronic device, characterized by comprising a processor, a read-only memory, a random access memory, a touch panel and a display;
the processor is configured to acquire a configuration file and a finally generated combined image from a first electronic device; the configuration file comprises: a first correspondence between the serial number of each pixel block in the finally generated combined image and the serial number of the original image, and a second correspondence giving the position information of each pixel block in the finally generated combined image within its corresponding original image; wherein the finally generated combined image is generated in one of the following two ways:
dividing each of N frames of original images in the animation into M pixel blocks in the same manner; writing the M pixel blocks of the first frame original image of the N frames of original images into a combined image; for a K-th frame original image, the K-th frame original image being any one of the remaining N-1 frames of original images, performing the following operations: traversing a first pixel block in the K-th frame original image, and writing the first pixel block into the combined image when it differs from the pixel block at the same position in the (K-1)-th frame original image; otherwise, not writing the first pixel block into the combined image; writing a second pixel block in the K-th frame original image into the current combined image when it differs from all pixel blocks in the current combined image; otherwise, not writing the second pixel block into the current combined image; returning to traverse a third pixel block in the K-th frame original image, and so on, until all M pixel blocks in the K-th frame original image have been traversed, wherein K is a positive integer greater than 1;
or: dividing each of N frames of original images in the animation into M pixel blocks in the same manner; writing the M pixel blocks of a first image into a combined image, the first image being any one of the N frames of original images; performing a setting operation for each of the remaining N-1 frames of original images, the setting operation comprising: traversing a first pixel block in a second image, and writing the first pixel block into the combined image when it differs from all pixel blocks in the first image; otherwise, not writing the first pixel block into the combined image; writing a second pixel block in the second image into the current combined image when it differs from all pixel blocks in the first image; otherwise, not writing the second pixel block into the current combined image; returning to traverse a third pixel block in the second image, and so on, until all M pixel blocks in the second image have been traversed; performing the setting operation on a fourth pixel block in a third image of the N-1 frames of original images, writing the fourth pixel block into the current combined image when it differs from all pixel blocks in the first image and all pixel blocks in the second image; otherwise, not writing the fourth pixel block into the current combined image; returning to traverse a fifth pixel block in the third image, and so on, until all M pixel blocks in the third image have been traversed; performing the setting operation on the M pixel blocks of a fourth image of the N-1 frames of original images, and so on, until all images of the N-1 frames of original images have been traversed;
the read-only memory is configured to store one or more computer programs, the configuration file and the finally generated combined image;
when the touch panel detects a user operation of playing the animation, the processor determines, from the finally generated combined image and according to the first correspondence, N groups of M target pixel blocks corresponding to the serial numbers of the N frames of original images; determines, according to the second correspondence, the position information of each group of M target pixel blocks in the corresponding original image; and arranges and combines each group of M target pixel blocks according to the position information to regenerate the N frames of original images, and restores the animation from the N frames of original images;
the random access memory is configured to store the N frames of original images and the animation;
the display is configured to display the animation.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a computer program which, when run on an electronic device, causes the electronic device to perform the image processing method according to any one of claims 1 to 7.
CN201811315321.5A 2018-11-06 2018-11-06 Image processing method and electronic equipment Active CN109559270B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811315321.5A CN109559270B (en) 2018-11-06 2018-11-06 Image processing method and electronic equipment
PCT/CN2019/115553 WO2020093988A1 (en) 2018-11-06 2019-11-05 Image processing method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811315321.5A CN109559270B (en) 2018-11-06 2018-11-06 Image processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN109559270A CN109559270A (en) 2019-04-02
CN109559270B true CN109559270B (en) 2021-12-24

Family

ID=65865965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811315321.5A Active CN109559270B (en) 2018-11-06 2018-11-06 Image processing method and electronic equipment

Country Status (2)

Country Link
CN (1) CN109559270B (en)
WO (1) WO2020093988A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109559270B (en) * 2018-11-06 2021-12-24 华为技术有限公司 Image processing method and electronic equipment
CN113127161A (en) * 2019-12-30 2021-07-16 上海仙豆智能机器人有限公司 Processing engine selection method, processing engine selection device and computer-readable storage medium
CN112783839A (en) * 2020-06-08 2021-05-11 北京金山办公软件股份有限公司 Method and device for storing document, electronic equipment and storage medium
CN112052050B (en) * 2020-08-20 2021-06-08 腾讯科技(深圳)有限公司 Shared picture generation method, system, storage medium and terminal equipment
CN113286076B (en) * 2021-04-09 2022-12-06 华为技术有限公司 Shooting method and related equipment
CN113110802B (en) * 2021-04-16 2024-06-04 维沃移动通信有限公司 Image block processing method and device, electronic equipment and readable storage medium
CN113726950B (en) * 2021-06-10 2022-08-02 荣耀终端有限公司 Image processing method and electronic equipment
CN114363697B (en) * 2022-01-06 2024-04-26 上海哔哩哔哩科技有限公司 Video file generation and playing method and device
CN114090168A (en) * 2022-01-24 2022-02-25 麒麟软件有限公司 Self-adaptive adjusting method for image output window of QEMU (QEMU virtual machine)
CN116703741B (en) * 2022-09-27 2024-03-15 荣耀终端有限公司 Image contrast generation method and device and electronic equipment
CN116389898B (en) * 2023-02-27 2024-03-19 荣耀终端有限公司 Image processing method, device and storage medium
CN116842117B (en) * 2023-06-19 2024-03-12 重庆市规划和自然资源信息中心 Geous image output method based on geotools for repairing self-intersecting

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1074574A (en) * 1992-01-14 1993-07-21 国际商业机器公司 Digital video compression method and device
CN101031085A (en) * 2007-03-30 2007-09-05 中国联合通信有限公司 Method for processing mobile-terminal frame cartoon
CN103281538A (en) * 2013-06-06 2013-09-04 上海交通大学 Intra-frame coding method based on rolling hash and block-level intra-frame prediction
CN105025269A (en) * 2015-07-26 2015-11-04 杜春辉 Low-flow image transmission method
CN106559670A (en) * 2016-11-07 2017-04-05 湖南源信光电科技有限公司 A kind of improved piecemeal video compress perception algorithm


Also Published As

Publication number Publication date
CN109559270A (en) 2019-04-02
WO2020093988A1 (en) 2020-05-14

Similar Documents

Publication Publication Date Title
CN109559270B (en) Image processing method and electronic equipment
CN112130742B (en) Full screen display method and device of mobile terminal
CN109814766B (en) Application display method and electronic equipment
CN113726950B (en) Image processing method and electronic equipment
CN111190681A (en) Display interface adaptation method, display interface adaptation design method and electronic equipment
CN113722058B (en) Resource calling method and electronic equipment
CN112114912A (en) User interface layout method and electronic equipment
CN113838490B (en) Video synthesis method and device, electronic equipment and storage medium
CN113961157B (en) Display interaction system, display method and equipment
CN113448382A (en) Multi-screen display electronic device and multi-screen display method of electronic device
CN113935898A (en) Image processing method, system, electronic device and computer readable storage medium
CN113254409A (en) File sharing method, system and related equipment
CN110989961A (en) Sound processing method and device
WO2022095744A1 (en) Vr display control method, electronic device, and computer readable storage medium
CN114089932A (en) Multi-screen display method and device, terminal equipment and storage medium
CN114995715B (en) Control method of floating ball and related device
CN113641271A (en) Application window management method, terminal device and computer readable storage medium
CN114756184A (en) Collaborative display method, terminal device and computer-readable storage medium
CN112437341B (en) Video stream processing method and electronic equipment
CN110286975B (en) Display method of foreground elements and electronic equipment
CN113542574A (en) Shooting preview method under zooming, terminal, storage medium and electronic equipment
CN113438366A (en) Information notification interaction method, electronic device and storage medium
CN115686403A (en) Display parameter adjusting method, electronic device, chip and readable storage medium
CN113495733A (en) Theme pack installation method and device, electronic equipment and computer readable storage medium
CN114691248A (en) Method, device and equipment for displaying virtual reality interface and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant