WO2021013019A1 - Picture processing method and apparatus - Google Patents

Picture processing method and apparatus

Info

Publication number
WO2021013019A1
WO2021013019A1, PCT/CN2020/102190, CN2020102190W
Authority
WO
WIPO (PCT)
Prior art keywords
data
memory
picture data
gpu
picture
Prior art date
Application number
PCT/CN2020/102190
Other languages
English (en)
French (fr)
Inventor
谭威
孟坤
涂赟
王亮
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP20844804.3A priority Critical patent/EP3971818A4/en
Priority to US17/627,951 priority patent/US20220292628A1/en
Publication of WO2021013019A1 publication Critical patent/WO2021013019A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/60Memory management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping

Definitions

  • This application relates to the field of terminal technology, and in particular to an image processing method and device.
  • Before a mobile device displays a picture on an application (APP) interface, the central processing unit (CPU) of the mobile device needs to decode the picture into bitmap data and save the bitmap data in a memory partition belonging to the CPU. The CPU then converts the bitmap data in the CPU memory partition into texture data and transfers the texture data to a memory partition belonging to the graphics processing unit (GPU). Finally, the GPU draws the texture data, and the picture is displayed on the display screen of the mobile device.
  • It can be seen that the picture display process in an APP occupies one block of CPU memory to store the bitmap data generated by decoding, and another block of GPU memory to store the texture data converted from the bitmap data. The above technical solution therefore needs to allocate one block of memory to the CPU and another block to the GPU, which wastes memory. Moreover, converting bitmap data into texture data and uploading it to GPU memory takes a long time, so stutter easily occurs during picture display, picture browsing is not smooth for the user, and user experience is poor.
  • This application provides a picture processing method and apparatus, which solve the memory waste in the picture display process in the prior art and the stutter caused by the time-consuming conversion of bitmap data into texture data and its upload to GPU memory.
  • A first aspect provides a picture processing method, which may be applied to an electronic device.
  • The method may include: the electronic device displays an interactive interface of an application; when a preset operation performed by the user on the interactive interface is detected, the application starts, based on the input event, to obtain the picture data to be displayed; the picture data to be displayed by the application is decoded into bitmap data, and the bitmap data is encapsulated into texture data; the texture data is stored in a memory partition accessible to the graphics processing unit GPU; the GPU is triggered to read the texture data and perform drawing processing to obtain rendered data; and the display is triggered to display the picture according to the rendered data.
  • The preset operation includes, but is not limited to, sliding, tapping, double-tapping, touching and holding, pressing firmly, an air gesture, moving the gaze focus, and the like.
  • In the prior art, a memory allocator first applies for a memory partition for the CPU to store the bitmap data accessed by the CPU, and then applies for a memory partition for the GPU to store the texture data accessed by the GPU.
  • In this embodiment of this application, the memory allocator defined by the electronic device applies for a memory partition that stores texture data and that is accessible to the graphics processing unit GPU. When the picture data is decoded, the texture data generated by decoding is stored in that memory partition.
  • In a possible design, before the picture data to be displayed by the application is decoded into bitmap data, the method further includes: creating a software decoder; and defining a memory allocator, where the memory allocator is used to apply for a memory partition accessible to the GPU, and the memory partition accessible to the GPU is used to store texture data.
  • That the memory allocator is used to apply for the memory partition accessible to the GPU includes: the memory allocator calls a first interface to apply to the internal memory for the memory partition accessible to the GPU, where the first interface is a standard interface through which the GPU applies for a memory partition, and the memory partition accessible to the GPU includes a physical address range accessible to the GPU and the size of the memory partition accessible to the GPU.
  • Memory is requested by calling the standard interface through which the GPU applies for a memory partition, which avoids requesting CPU memory and reduces wasted memory.
  • Decoding the picture data to be displayed by the application into bitmap data and encapsulating the bitmap data into texture data includes: decoding the first row of the picture data to be displayed to generate the bitmap data of the first row, and converting the bitmap data of the first row to generate the texture data of the first row; the same processing is then performed on the second row of the picture data to be displayed, and so on, until the last row of the picture data to be displayed has been processed.
  • The bitmap data generated by decoding is converted in parallel to obtain texture data directly, which avoids converting the bitmap data stored in the CPU memory partition into texture data and then uploading it to the GPU memory partition, and therefore avoids the stutter caused by time-consuming data uploading.
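  • The per-row conversion described above can be pictured as a small routine that rewrites one decoded bitmap row into the pixel layout a GPU texture expects. The following is a minimal C++ sketch, not taken from the patent: the function name and the choice of RGB-to-RGBA8888 conversion are illustrative assumptions.

    #include <cstdint>
    #include <cstddef>

    // Hypothetical per-row converter: expands 3-byte RGB bitmap pixels produced by
    // the decoder into 4-byte RGBA texels written straight into a GPU-visible buffer.
    // Because each row is independent, rows can be converted in parallel with decoding.
    void ConvertRowRgbToRgba8888(const uint8_t* bitmap_row, uint8_t* texture_row,
                                 size_t pixel_count) {
        for (size_t x = 0; x < pixel_count; ++x) {
            texture_row[4 * x + 0] = bitmap_row[3 * x + 0];  // R
            texture_row[4 * x + 1] = bitmap_row[3 * x + 1];  // G
            texture_row[4 * x + 2] = bitmap_row[3 * x + 2];  // B
            texture_row[4 * x + 3] = 0xFF;                   // opaque alpha
        }
    }

  • Because the converter writes directly into GPU-visible memory, no separate CPU-side bitmap buffer needs to be retained once a row has been converted.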
  • Decoding the first row of the picture data to be displayed to generate the bitmap data of the first row and converting it to generate the texture data of the first row includes: calling a decoding function to decode the first row of picture data and generate the bitmap data corresponding to the first row; and calling a texture conversion dynamic library to perform data type conversion on the bitmap data corresponding to the first row and generate the texture data corresponding to the first row, where the texture conversion dynamic library includes a conversion function that converts bitmap data into texture data; the same processing is then performed on the second row of the picture data to be displayed until the last row of the picture data to be displayed has been processed.
  • The texture data is generated directly by performing parallelized acceleration processing on the bitmap data, which avoids converting the bitmap data stored in the CPU memory partition into texture data and then uploading it to the GPU memory partition, and therefore avoids the stutter caused by time-consuming data uploading.
  • A second aspect provides an electronic device, including a memory and one or more processors, where the memory is coupled to the processor; the memory is used to store computer program code, and the computer program code includes computer instructions; when the processor executes the computer instructions, the electronic device performs the method described in the first aspect and any of its possible designs.
  • A third aspect provides a chip system, which can be applied to an electronic device; the system includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected by wires; the interface circuit is used to receive a signal from the memory of the electronic device and send the signal to the processor, where the signal includes the computer instructions stored in the memory; when the processor executes the computer instructions, the electronic device performs the method described in the first aspect and any of its possible designs.
  • A fourth aspect provides a readable storage medium. The readable storage medium stores instructions, and when the instructions are run on an electronic device, the electronic device performs the method described in the first aspect and any of its possible designs.
  • A fifth aspect provides a computer program product; when the computer program product runs on a computer, the computer is caused to execute the method described in the first aspect and any of its possible designs.
  • Any of the electronic device, chip system, readable storage medium, and computer program product provided above can be implemented according to the corresponding picture display method provided above; therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the picture display method provided above, which are not repeated here.
  • FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • FIG. 2 is a schematic structural diagram of a processor and memory of an electronic device provided by an embodiment of the application;
  • FIG. 3 is a software structure diagram of an electronic device provided by an embodiment of the application.
  • FIG. 4 is a processing schematic diagram of a picture display method provided by an embodiment of this application.
  • FIG. 5 is a processing schematic diagram of another picture display method provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of the processing flow of a picture display method provided by an embodiment of the application.
  • FIG. 7 is a software processing flowchart of a picture display method provided by an embodiment of the application.
  • FIG. 8 is a software processing flowchart of another picture display method provided by an embodiment of the application.
  • FIG. 9 is a schematic structural diagram of a chip system provided by an embodiment of the application.
  • A bitmap, also called a dot-matrix image or raster image, is composed of individual points called pixels (picture elements). These pixels can be arranged and colored in different ways to form a picture.
  • Picture data: can be a local picture file to be displayed by the electronic device or a downloaded picture data stream. The format of the picture data can be Portable Network Graphics (PNG), JPEG (Joint Photographic Experts Group), a streaming media file (Stream), or another format.
  • Bitmap data: the picture data in bitmap format generated by decoding picture data can be called bitmap data.
  • Texture data: a format of picture data; bitmap data that can represent the surface details of an object, specifically picture data that represents the colored planar pattern or uneven grooves of a picture. Texture data can serve as data that the GPU can recognize and draw.
  • Rendered data: the data generated when the graphics processor GPU performs pixel rendering and pixel filling according to texture data and drawing instructions is called rendered data. The display processing module of the electronic device can perform image display processing according to the rendered data.
  • Stutter: a phenomenon occurring in electronic devices such as mobile phones and notebook computers; specifically, frames lag while the device is operating, for example when playing a game or when displaying pictures.
  • Application programming interface (API): a set of predefined functions intended to give applications and developers the ability to access a group of routines based on certain software or hardware, without accessing the source code or understanding the details of the internal working mechanism.
  • The electronic device involved in this application can be a device with a touchscreen, such as a mobile phone, tablet computer, desktop computer, laptop, handheld computer, notebook computer, ultra-mobile personal computer (UMPC), netbook, cellular phone, personal digital assistant (PDA), or augmented reality (AR)/virtual reality (VR) device.
  • Fig. 1 shows a schematic structural diagram of an electronic device 100.
  • The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • The processor 110 may include one or more processing units. It should be understood that the processor 110 may include a central processing unit (CPU), an application processor (AP), a modem, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, a neural-network processing unit (NPU), and so on. The different processing units may be independent devices or may be integrated into one or more processors. As an example, only the central processing unit and the graphics processing unit of the processor 110 are shown in FIG. 1.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to use the instruction or data again, it can be directly called from the memory. Repeated accesses are avoided, the waiting time of the processor 110 is reduced, and the efficiency of the system is improved.
  • the processor 110 may include one or more interfaces.
  • The interfaces can include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is merely a schematic description, and does not constitute a structural limitation of the electronic device 100.
  • the electronic device 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the electronic device 100 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display screen 194 is used to display pictures, videos, etc.
  • the display screen 194 includes a display panel.
  • The display panel can adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), and the like.
  • the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, at least one application program (such as a sound playback function, a picture playback function, etc.) required by at least one function.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • The internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). It should be noted that in the embodiments of this application, the internal memory 121 has the same meaning as the memory described in the embodiments of this application, and the storage data area in the internal memory 121 may include a memory partition accessible to the CPU and a memory partition accessible to the GPU.
  • the CPU can be a large-scale integrated circuit, which is the computing core and control center of a computer. Its function is mainly to interpret computer instructions and process data in computer software.
  • The CPU mainly includes an arithmetic logic unit (ALU), a cache, and the data, control, and status buses that connect them. Together with the internal memory and input/output (I/O) devices, these are collectively called the three core components of an electronic computer.
  • the GPU also known as display core, visual processor, and display chip, is a microprocessor that specializes in image computing.
  • the purpose of GPU is to convert and drive the display information required by electronic devices, and to provide line scan signals to the display to control the correct display of the display. It is an important component connecting the display and the CPU.
  • The GPU may include: a job management module for managing the GPU's execution of rendering instructions issued by the CPU; an Advanced Peripheral Bus (APB) module; a shader module; rendering materials; a memory management unit (MMU); and an L2 cache module.
  • the CPU sends a command to the GPU, which can be an instruction to render a picture; the GPU can interact with the CPU by sending an interrupt.
  • Internal memory, abbreviated as memory, is the device in an electronic device used to store data and programs and is the bridge for communication with the CPU and GPU.
  • The memory can be double data rate (DDR) memory, referred to as DDR memory for short. All programs in the electronic device run in the memory, so the performance of the memory has a great impact on the electronic device.
  • The data storage area in the memory can be used to temporarily store the computation data of the CPU or GPU, and the data exchanged with external storage such as a hard disk. As long as the electronic device is running, the CPU or GPU transfers the data that needs to be computed into the memory for computation, and then transfers the result out of the memory when the computation is complete. Therefore, as shown in Figure 2, the DDR memory includes a memory partition accessible to the GPU and a memory partition accessible to the CPU.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present application takes a layered Android system as an example to illustrate the software structure of the electronic device 100.
  • FIG. 3 is a software structure block diagram of the electronic device 100 according to the application embodiment.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Communication between layers through software interface.
  • the Android system is divided into four layers, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides application programming interfaces (application programming interface, API) and programming frameworks for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, a content provider, a view system, a phone manager, a resource manager, and a notification manager.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include videos, pictures, audios, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text and controls that display pictures.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device 100. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, etc.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can disappear automatically after a short stay without user interaction.
  • the notification manager is used to notify the download completion, message reminder, etc.
  • the notification manager can also be a notification that appears in the status bar at the top of the system in the form of a chart or scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, text messages are prompted in the status bar, prompt sounds, electronic devices vibrate, and indicator lights flash.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the application layer and the application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (media libraries), 3D graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • surface manager surface manager
  • media library media libraries
  • 3D graphics processing library for example: OpenGL ES
  • 2D graphics engine for example: SGL
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D graphics.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • When the touch sensor 180K receives a touch operation, the corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into an original input event (including touch coordinates, the timestamp of the touch operation, and other information). The original input event is stored at the kernel layer.
  • The application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking an example in which the touch operation is a touch tap operation and the control corresponding to the tap operation is the control of the Gallery application icon, the Gallery application calls the interface of the application framework layer to start the Gallery application, then starts the display driver by calling the kernel layer, and the display screen 194 presents the picture.
  • An embodiment of the present application provides a picture processing method, as shown in FIG. 4, which may include several processing processes of picture loading, picture decoding, and GPU drawing.
  • the picture to be displayed is loaded.
  • The mobile phone displays an interactive interface of an application. When a preset operation performed by the user on the interactive interface is detected (for example, sliding, tapping, double-tapping, touching and holding, pressing firmly, an air gesture, or moving the gaze focus), the picture to be displayed first needs to be loaded to obtain the picture data to be displayed.
  • the image data to be displayed may be a local image file of the mobile phone or a downloaded image data stream, and the format of the image data may be a format such as PNG, JPEG, or Stream.
  • The local picture file can be a picture file that the system reads from the local storage of the mobile phone through an input/output port, and the picture data stream can be picture data downloaded from the network through a uniform resource locator (URL).
  • This part of the processing can be the process, shown in Figure 5, of an application in the mobile phone displaying a picture. When the application needs to display a picture, the bitmap data obtained by the decoder's decoding is not stored in the CPU memory partition; instead, it is encapsulated into texture data directly through parallelized acceleration by the texture management module, and the generated texture data is stored in the GPU memory partition.
  • The GPU drawing operation can specifically be that the GPU receives the drawing instruction sent by the CPU and draws the texture data in the GPU memory partition; the generated rendered data is stored in the GPU memory for the next step of display processing.
  • the image processing method may include the following steps 601-604:
  • 601: The electronic device displays an interactive interface of an application. When a preset operation performed by the user on the interactive interface is detected, the picture data of the picture to be displayed is obtained, a decoder is created, and a memory allocator is defined. The memory allocator is used to apply for a memory partition accessible to the graphics processing unit GPU, and the memory partition accessible to the GPU is used to store texture data.
  • the decoder includes a software decoder and a hardware decoder.
  • This embodiment takes a software decoder as an example for description, and does not constitute a limitation to the present application.
  • the electronic device is a mobile phone as an example for description.
  • the mobile phone displays the interactive interface of the application. If the mobile phone detects the user's sliding operation on the interactive interface, the content of the picture displayed on the screen needs to be refreshed.
  • As shown in Figure 7, after the touch hardware at the hardware layer of the mobile phone receives the user's trigger operation, it sends the input event to the touch input of the kernel layer; the input event then passes through the read-event and event-dispatch components of the system library. The read-event component is responsible for reading the input event from the touch input, and the event-dispatch component is responsible for dispatching the input event to the window management of the application framework layer.
  • Window management is mainly used to dispatch input events to different processing modules. For example, the picture display event of this application is dispatched by window management to interface management for processing.
  • Interface management passes the event, through event delivery, to the corresponding interface display area; interface management receives the picture display event and executes the processing shown in Figure 5 above.
  • At this point, the window management of the application framework layer receives the input event and obtains the picture data, which can be understood as window management receiving the input event of the trigger operation for picture display, where the input event is used to trigger obtaining the picture data to be displayed.
  • the picture data to be displayed includes a picture file stored locally on the mobile phone or a picture data stream downloaded through a URL.
  • For example, when the user taps the Gallery application on the mobile phone, the mobile phone needs to display thumbnails of the pictures in the gallery, which triggers the display scenario of locally stored pictures.
  • For another example, when the application is WeChat and the user slides a finger on the screen so that the picture content displayed on the WeChat display interface needs to change, that picture content can be obtained by downloading a picture data stream through a URL to display the picture after the sliding operation.
  • When the software decoder is created, as shown in Figure 7, specifically, when the trigger operation for picture display is obtained, the application calls the decoding module in the system library through the decoding processing interface to create the software decoder.
  • When the decoding module is called, the system also defines the memory allocator by calling memory management, thereby defining the memory allocator belonging to the decoding module. That is to say, while the software decoder is triggered to decode, the memory allocation attributes of the data generated after decoding are defined.
  • In this embodiment, what the decoding module obtains after decoding the picture data is texture data; therefore, the memory partition requested by the memory allocator defined in this application is a memory partition that stores the texture data and is accessible to the GPU.
  • This embodiment of this application adds two new classes (in other words, two new functions, including a first function and a second function) to the Android runtime and system library and to memory management in the software structure, to implement the memory allocation and management process in picture decoding.
  • the two newly added functions may be TextureImageAllocator and TextureImage.
  • TextureImageAllocator can be used to apply for allocation of a memory partition accessible by the GPU; TextureImage can be used to indicate that the memory partition is used to store texture data.
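  • The patent names the two additions TextureImageAllocator and TextureImage but gives no code for them. The C++ sketch below illustrates one plausible shape for the pair, assuming a generic gpu_alloc()/gpu_free() placeholder standing in for whatever standard GPU memory-allocation interface the platform exposes; every member name beyond the two class names is hypothetical.

    #include <cstddef>
    #include <cstdint>
    #include <cstdlib>

    // Placeholder for the platform's standard GPU memory-allocation interface.
    // A real implementation would call the GPU driver's allocator; malloc is used
    // here only so the sketch compiles and runs.
    static void* gpu_alloc(std::size_t bytes) { return std::malloc(bytes); }
    static void  gpu_free(void* ptr)          { std::free(ptr); }

    // TextureImage: indicates that the allocated partition holds texture data.
    struct TextureImage {
        void*       gpu_address = nullptr;  // address of the GPU-accessible partition
        std::size_t size_bytes  = 0;        // size of the partition
        int         width = 0, height = 0;  // texture dimensions
    };

    // TextureImageAllocator: requests a GPU-accessible memory partition so the
    // decoder can write texture data into it directly, with no CPU-side bitmap copy.
    class TextureImageAllocator {
     public:
        TextureImage Allocate(int width, int height, int bytes_per_pixel) {
            TextureImage img;
            img.width      = width;
            img.height     = height;
            img.size_bytes = static_cast<std::size_t>(width) * height * bytes_per_pixel;
            img.gpu_address = gpu_alloc(img.size_bytes);  // standard GPU interface
            return img;
        }
        void Release(TextureImage& img) {
            gpu_free(img.gpu_address);
            img = TextureImage{};
        }
    };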
  • the specific execution process of the related software can be as shown in Figure 8.
  • The CPU calls the decoding interface through the entry implemented at the bottom layer of the decoder, selects and calls the texture-data-type memory allocator TextureImageAllocator, and then calls the class TextureImage that saves texture data, to start applying to the internal memory for allocation of a memory partition.
  • the CPU then calls the interface to start allocating memory partitions through the class that saves the texture data, and points the memory allocator of the texture data type to the corresponding memory allocation interface to start allocating the memory partition.
  • the CPU obtains the physical address of the allocated memory partition, and points the physical address of the memory partition to the memory partition storing the decoded data.
  • 602: The electronic device allocates a memory partition accessible to the GPU.
  • When the electronic device allocates the memory partition, the memory allocator is used to call a first interface to apply to the internal memory for a GPU-accessible memory partition; the first interface can be a standard interface through which the GPU applies for a memory partition, and the GPU-accessible memory partition includes a physical address range accessible to the GPU and the size of the memory partition accessible to the GPU.
  • the specific process of allocating memory partitions can be that the system calls the GPU's memory application interface.
  • This interface is defined by the system package and can be used to apply for and respond to the allocation of memory to realize the allocation of GPU memory.
  • What is obtained after the application is the DDR address space that the GPU is authorized to access, and the response data may include pointer data for storing memory addresses.
  • This application avoids the problem of memory waste caused by allocating two blocks of memory in the prior art.
  • For example, memory management calls the GPU's standard memory application interface to apply to the DDR memory for a memory partition, and the DDR memory feeds back to the GPU the size of the allocated memory partition, a memory pointer indicating the physical address of the memory partition, the stored data type of the partition (texture data), and so on.
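  • On Android specifically, one existing standard interface that behaves like the "first interface" described here is the NDK AHardwareBuffer allocator, which hands back a buffer that both the CPU can write and the GPU can sample. The sketch below is an assumption about how such a partition might be requested on that platform; it is not code from the patent.

    #include <android/hardware_buffer.h>  // NDK, API level 26+
    #include <cstdint>

    // Request a GPU-sampleable, CPU-writable buffer sized for a width x height
    // RGBA8888 texture. Returns nullptr on failure.
    AHardwareBuffer* AllocateGpuVisibleTexture(uint32_t width, uint32_t height) {
        AHardwareBuffer_Desc desc = {};
        desc.width  = width;
        desc.height = height;
        desc.layers = 1;
        desc.format = AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM;
        // The CPU writes decoded rows; the GPU samples the same memory as a texture.
        desc.usage  = AHARDWAREBUFFER_USAGE_CPU_WRITE_OFTEN |
                      AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE;

        AHardwareBuffer* buffer = nullptr;
        if (AHardwareBuffer_allocate(&desc, &buffer) != 0) {
            return nullptr;  // allocation failed
        }
        return buffer;
    }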
  • 603: The electronic device decodes the picture data to be displayed into bitmap data, encapsulates the bitmap data into texture data, and stores the texture data in the memory partition accessible to the GPU.
  • the image data is decoded line by line to generate bitmap data.
  • the system directly performs data type conversion processing on the generated bitmap data line by line to generate texture data, which is stored in the GPU accessible memory partition applied for in step 602.
  • In this embodiment, a library for converting texture data in parallel, denoted the libtexture dynamic library, is added at the Android runtime and system library layer; this dynamic library can convert bitmap data into texture data. The libtexture dynamic library of this embodiment can dynamically generate a conversion function based on the data types of the generated bitmap data and the texture data, implement the conversion function through the libtexture dynamic library, and directly generate texture data by means of parallelized processing and an acceleration library.
  • The specific execution flow of the related software can be as shown in Figure 8. The CPU calls the decoding function (for example, the libjpeg decoding function), starts scanning and decoding by row, and generates bitmap data after decoding each row of picture data; then the CPU calls the texture conversion (for example, the libtexture dynamic library) to convert the bitmap data into texture data row by row through parallelized acceleration, and stores the generated texture data in the requested memory space. The CPU then processes the next row according to the above flow.
  • For example, the CPU calls the decoding function libjpeg to decode the first row of picture data and generate the bitmap data corresponding to the first row; the CPU calls the texture conversion dynamic library libtexture to perform data type conversion on the bitmap data corresponding to the first row and generate the texture data corresponding to the first row, where the texture conversion dynamic library includes a conversion function that converts bitmap data into texture data; then the CPU performs the above processing on the second row of the picture data to be displayed, until the last row of the picture data to be displayed has been processed.
  • This embodiment of this application extends the function and algorithm of texture encapsulation by using the basic functions of an existing decoder. Specifically, on the basis of the libjpeg decoding function and the existing decoding capability, the libtexture dynamic library is added, so that the data generated by libjpeg decoding is turned into texture data through parallel decoding processing and data type conversion. This resolves the stutter that may be caused by the time-consuming texture conversion and upload process in the prior art and improves user experience.
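  • Since the passage names libjpeg's row-by-row decoding as the basis, the C++ sketch below shows the standard libjpeg scanline loop feeding each decoded row straight into a per-row conversion step. The setup is abridged (no custom error handling), and ConvertRowRgbToRgba8888 refers to the illustrative converter sketched earlier; the overall function is an assumption about how decode and encapsulation could be interleaved, not the patent's exact code.

    #include <cstdio>
    #include <vector>
    #include <jpeglib.h>  // libjpeg / libjpeg-turbo

    // Illustrative per-row converter from the earlier sketch.
    void ConvertRowRgbToRgba8888(const unsigned char* in, unsigned char* out,
                                 std::size_t pixel_count);

    // Decode a JPEG file row by row and convert each row into RGBA texture memory
    // (for example, a GPU-accessible partition) without keeping a full CPU-side bitmap.
    bool DecodeJpegIntoTexture(FILE* file, unsigned char* texture_memory) {
        jpeg_decompress_struct cinfo;
        jpeg_error_mgr jerr;
        cinfo.err = jpeg_std_error(&jerr);
        jpeg_create_decompress(&cinfo);
        jpeg_stdio_src(&cinfo, file);
        jpeg_read_header(&cinfo, TRUE);
        jpeg_start_decompress(&cinfo);  // output_width/output_height now valid

        std::vector<unsigned char> row(cinfo.output_width * cinfo.output_components);
        JSAMPROW row_ptr = row.data();
        while (cinfo.output_scanline < cinfo.output_height) {
            unsigned int y = cinfo.output_scanline;
            jpeg_read_scanlines(&cinfo, &row_ptr, 1);   // decode one row to bitmap data
            ConvertRowRgbToRgba8888(row.data(),
                                    texture_memory + y * cinfo.output_width * 4,
                                    cinfo.output_width);  // encapsulate as one texture row
        }
        jpeg_finish_decompress(&cinfo);
        jpeg_destroy_decompress(&cinfo);
        return true;
    }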
  • 604: The electronic device triggers the GPU to read the texture data and perform drawing processing to obtain rendered data; the rendered data is stored in the memory partition of the display, and the display is triggered to display the picture according to the rendered data.
  • the GPU may perform drawing processing according to the texture data and drawing instructions to obtain the rendered data.
  • As shown in Figure 7, the rendered data is saved in the GPU memory; after the data composition (SurfaceFlinger) of display processing obtains this block of GPU memory, the data is stored, through the composer hardware abstraction layer and the display driver, in the memory of the liquid crystal display (LCD) and displayed by the LCD.
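  • One way a GPU can draw directly from a shared, GPU-accessible buffer on Android without a texture upload is to wrap the buffer in an EGLImage and bind it as a GL texture. The sketch below shows that wrapping step under the assumption that the texture data lives in an AHardwareBuffer, as in the earlier allocation sketch; it illustrates the general technique, not the patent's exact implementation, and assumes the EGL/GLES extension prototypes are available (on some toolchains they must be loaded via eglGetProcAddress).

    #include <android/hardware_buffer.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    // Bind an already-filled AHardwareBuffer as a GL texture so the GPU samples the
    // same memory the decoder wrote into; no glTexImage2D upload is needed.
    GLuint WrapBufferAsTexture(EGLDisplay display, AHardwareBuffer* buffer) {
        EGLClientBuffer client_buffer = eglGetNativeClientBufferANDROID(buffer);
        const EGLint attrs[] = { EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE };
        EGLImageKHR image = eglCreateImageKHR(display, EGL_NO_CONTEXT,
                                              EGL_NATIVE_BUFFER_ANDROID,
                                              client_buffer, attrs);
        GLuint texture = 0;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glEGLImageTargetTexture2DOES(GL_TEXTURE_2D,
                                     (GLeglImageOES)image);  // attach shared memory
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return texture;  // ready for the usual draw-call path
    }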
  • In this embodiment of this application, additions and improvements are made to the application framework layer and the system library of the system so that picture data is processed directly in parallel after decoding to generate texture data, and the texture data is stored in the memory partition requested for the GPU, so that the GPU performs rendering processing based on the texture data. This solves the problem in the prior art of needing to apply for two blocks of memory for the CPU and the GPU, and at the same time solves the time consumed in the prior art by copying data from CPU memory to GPU memory in the texture upload process, alleviating stutter and improving user experience.
  • Other embodiments of this application provide an electronic device, which may include a memory and one or more processors, where the memory and the processor are coupled.
  • the memory is used to store computer program code, and the computer program code includes computer instructions.
  • the processor executes the computer instructions, the electronic device can execute various functions or steps in the foregoing method embodiments.
  • the structure of the electronic device can refer to the structure of the electronic device 100 shown in FIG. 1.
  • the embodiment of the present application also provides a chip system, which can be applied to the electronic device in the foregoing embodiment.
  • the chip system includes at least one processor 901 and at least one interface circuit 902.
  • the processor 901 and the interface circuit 902 may be interconnected by wires.
  • the interface circuit 902 may be used to receive signals from other devices (such as the memory of an electronic device).
  • the interface circuit 902 may be used to send signals to other devices (for example, the processor 901).
  • The interface circuit 902 can read instructions stored in the memory and send the instructions to the processor 901. When the processor 901 executes the instructions, the electronic device can be made to execute the functions or steps executed by the electronic device in the foregoing embodiments.
  • the chip system may also include other discrete devices, which are not specifically limited in the embodiment of the present application.
  • the embodiments of the present application also provide a computer storage medium, the computer storage medium includes computer instructions, when the computer instructions run on the above-mentioned electronic device, the electronic device is caused to execute each function or step performed by the mobile phone in the above-mentioned method embodiment .
  • the embodiments of the present application also provide a computer program product, which when the computer program product runs on a computer, causes the computer to execute each function or step performed by the mobile phone in the above method embodiment.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • The division into modules or units described above is only a logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • The units described as separate parts may or may not be physically separate. The parts displayed as units may be one physical unit or multiple physical units; that is, they may be located in one place or distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • The technical solutions of the embodiments of this application, in essence, or the part that contributes to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product, which is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of this application.
  • the aforementioned storage medium includes: U disk, mobile hard disk, read only memory (read only memory, ROM), random access memory (random access memory, RAM), magnetic disk or optical disk and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

This application provides a picture processing method and apparatus, relating to the field of terminal system optimization, which solve the memory waste in the picture display process in the prior art and the stutter caused by the time-consuming conversion of bitmap data into texture data and its upload to GPU memory. The method includes: displaying an interactive interface of an application; when a preset operation performed by a user on the interactive interface is detected, decoding the picture data to be displayed by the application into bitmap data and encapsulating the bitmap data into texture data; storing the texture data in a memory partition accessible to the graphics processing unit GPU; triggering the GPU to read the texture data and perform drawing processing to obtain rendered data; and triggering the display to display the picture according to the rendered data.

Description

Picture processing method and apparatus
This application claims priority to Chinese Patent Application No. 201910657456.8, entitled "Picture processing method and apparatus" and filed with the China National Intellectual Property Administration on July 19, 2019, which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of terminal technologies, and in particular, to a picture processing method and apparatus.
Background
Before a mobile device displays a picture on an application (APP) interface, the central processing unit (CPU) of the mobile device first decodes the picture into bitmap data and saves the bitmap data in a memory partition belonging to the CPU. The CPU then converts the bitmap data in the CPU memory partition into texture data and transfers the texture data to a memory partition belonging to the graphics processing unit (GPU). Finally, the GPU draws the texture data, and the picture is displayed on the display screen of the mobile device.
It can be understood that this picture display process occupies one block of CPU memory to store the bitmap data generated by decoding and another block of GPU memory to store the texture data converted from the bitmap data. In other words, the above technical solution needs to allocate one block of memory to the CPU and another block of memory to the GPU, which wastes memory. Moreover, converting the bitmap data into texture data and uploading it to the GPU memory is time-consuming, so stutter easily occurs during picture display, picture browsing is not smooth for the user, and user experience is poor.
Summary
This application provides a picture processing method and apparatus, which solve the memory waste in the picture display process in the prior art and the stutter caused by the time-consuming conversion of bitmap data into texture data and its upload to GPU memory.
To achieve the foregoing objective, this application uses the following technical solutions:
A first aspect provides a picture processing method, which may be applied to an electronic device. The method may include: the electronic device displays an interactive interface of an application; when a preset operation performed by a user on the interactive interface is detected, the application starts, based on the input event, to obtain the picture data to be displayed; the picture data to be displayed by the application is decoded into bitmap data, and the bitmap data is encapsulated into texture data; the texture data is stored in a memory partition accessible to the graphics processing unit GPU; the GPU is triggered to read the texture data and perform drawing processing to obtain rendered data; and the display is triggered to display the picture according to the rendered data. It should be understood that the preset operation includes, but is not limited to, sliding, tapping, double-tapping, touching and holding, pressing firmly, an air gesture, moving the gaze focus, and the like.
In the prior art, a memory allocator first applies for a memory partition for the CPU to store bitmap data accessed by the CPU, and then applies for a memory partition for the GPU to store texture data accessed by the GPU. In this embodiment of this application, the memory allocator defined by the electronic device applies for a memory partition that stores texture data and that is accessible to the graphics processing unit GPU. When the picture data is decoded, the texture data generated from the decoded picture data is stored in this memory partition. In this way, no memory partition needs to be requested for CPU access, which solves the memory waste in the picture display process in the prior art. In addition, the decoding process in this application does not need to convert bitmap data into texture data and then upload the data from CPU memory to GPU memory, which resolves the stutter caused by time-consuming data copying and improves user experience.
In a possible design, before the picture data to be displayed by the application is decoded into bitmap data, the method further includes: creating a software decoder; and defining a memory allocator, where the memory allocator is configured to apply for a memory partition accessible to the GPU, and the memory partition accessible to the GPU is used to store texture data. In the foregoing possible implementation, the memory allocator is improved to define a GPU-accessible memory partition that stores the texture data generated by decoding, so no CPU-accessible memory needs to be requested, which reduces wasted memory.
In a possible design, that the memory allocator is configured to apply for a memory partition accessible to the GPU includes: the memory allocator is configured to call a first interface to apply to the internal memory for a memory partition accessible to the GPU, where the first interface is a standard interface through which the GPU applies for a memory partition, and the memory partition accessible to the GPU includes a physical address range accessible to the GPU and the size of the memory partition accessible to the GPU. In the foregoing possible implementation, memory is requested by calling the standard interface through which the GPU applies for a memory partition, which avoids requesting CPU memory and reduces wasted memory.
In a possible design, decoding the picture data to be displayed by the application into bitmap data and encapsulating the bitmap data into texture data includes: decoding the first row of the picture data to be displayed to generate bitmap data of the first row, and converting the bitmap data of the first row to generate texture data of the first row; the same processing is then performed on the second row of the picture data to be displayed, and so on, until the last row of the picture data to be displayed has been processed. In the foregoing possible implementation, the picture data is decoded and the bitmap data generated by decoding is converted in parallel to obtain texture data directly, which avoids converting bitmap data stored in a CPU memory partition into texture data and then uploading it to a GPU memory partition, and therefore avoids the stutter caused by time-consuming data uploading.
In a possible design, decoding the first row of the picture data to be displayed to generate the bitmap data of the first row, converting the bitmap data of the first row to generate the texture data of the first row, and then performing the same processing on the second row until the last row of the picture data to be displayed has been processed includes: calling a decoding function to decode the first row of picture data to generate bitmap data corresponding to the first row of picture data; calling a texture conversion dynamic library to perform data type conversion on the bitmap data corresponding to the first row of picture data to generate texture data corresponding to the first row of picture data, where the texture conversion dynamic library includes a conversion function that converts bitmap data into texture data; and then performing the same processing on the second row of the picture data to be displayed until the last row of the picture data to be displayed has been processed. In the foregoing possible implementation, the bitmap data is processed with parallelized acceleration to generate texture data directly, which avoids converting bitmap data stored in a CPU memory partition into texture data and then uploading it to a GPU memory partition, and therefore avoids the stutter caused by time-consuming data uploading.
A second aspect provides an electronic device, including a memory and one or more processors, where the memory is coupled to the processor; the memory is configured to store computer program code, and the computer program code includes computer instructions; when the processor executes the computer instructions, the electronic device performs the method according to the first aspect and any one of its possible designs.
A third aspect provides a chip system, which may be applied to an electronic device. The system includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected through a line; the interface circuit is configured to receive a signal from a memory of the electronic device and send the signal to the processor, where the signal includes the computer instructions stored in the memory; and when the processor executes the computer instructions, the electronic device performs the method according to the first aspect and any one of its possible designs.
A fourth aspect provides a readable storage medium. The readable storage medium stores instructions, and when the instructions are run on an electronic device, the electronic device performs the method according to the first aspect and any one of its possible designs.
A fifth aspect provides a computer program product. When the computer program product runs on a computer, the computer performs the method according to the first aspect and any one of its possible designs.
It can be understood that any of the electronic device, chip system, readable storage medium, and computer program product provided above may be implemented according to the corresponding picture display method provided above. Therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the picture display method provided above; details are not repeated here.
Brief Description of Drawings
FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of this application;
FIG. 2 is a schematic structural diagram of the processors and memory of an electronic device according to an embodiment of this application;
FIG. 3 is a software structure diagram of an electronic device according to an embodiment of this application;
FIG. 4 is a processing schematic diagram of a picture display method according to an embodiment of this application;
FIG. 5 is a processing schematic diagram of another picture display method according to an embodiment of this application;
FIG. 6 is a schematic processing flowchart of a picture display method according to an embodiment of this application;
FIG. 7 is a software processing flowchart of a picture display method according to an embodiment of this application;
FIG. 8 is a software processing flowchart of another picture display method according to an embodiment of this application;
FIG. 9 is a schematic structural diagram of a chip system according to an embodiment of this application.
Detailed Description
Before the embodiments of this application are described, related terms used in the technical solutions are briefly introduced.
Bitmap: also called a dot-matrix image or raster image, a bitmap is composed of individual points called pixels (picture elements). These pixels can be arranged and colored in different ways to form a picture.
Picture data: may be a local picture file to be displayed by the electronic device or a downloaded picture data stream. The format of the picture data may be Portable Network Graphics (PNG), JPEG (Joint Photographic Experts Group), a streaming media file (Stream), or another format.
Bitmap data: picture data in bitmap format generated by decoding picture data may be referred to as bitmap data.
Texture data: a picture data format; bitmap data that can represent the surface details of an object, specifically picture data that represents the colored planar pattern or uneven grooves of a picture. Texture data can serve as data that the GPU can recognize and draw.
Rendered data: the data generated when the graphics processing unit GPU performs pixel rendering and pixel filling according to texture data and drawing instructions. The display processing module of the electronic device can perform image display processing according to the rendered data.
Stutter: a phenomenon occurring in electronic devices such as mobile phones and notebook computers; specifically, frames lag during operation of the device, for example while playing a game or displaying a picture.
Application programming interface (API): a set of predefined functions intended to give applications and developers the ability to access a group of routines based on certain software or hardware, without having to access the source code or understand the details of the internal working mechanism.
The electronic device in this application may be a device with a touchscreen, such as a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), or an augmented reality (AR)/virtual reality (VR) device.
FIG. 1 shows a schematic structural diagram of an electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in this embodiment of this application does not constitute a specific limitation on the electronic device 100. In some other embodiments of this application, the electronic device 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently. The illustrated components may be implemented by hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. It should be understood that the processor 110 may include a central processing unit (CPU), an application processor (AP), a modem, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, a neural-network processing unit (NPU), and the like. The different processing units may be independent devices or may be integrated into one or more processors. As an example, only the central processing unit and the graphics processing unit of the processor 110 are shown in FIG. 1.
The controller may be the nerve center and command center of the electronic device 100. The controller may generate operation control signals according to instruction operation codes and timing signals to complete the control of instruction fetching and execution.
A memory may also be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may store instructions or data that have just been used or are used cyclically by the processor 110. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated access, reduces the waiting time of the processor 110, and therefore improves system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, and the like.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment of this application are merely schematic and do not constitute a structural limitation on the electronic device 100. In some other embodiments of this application, the electronic device 100 may also use interface connection manners different from those in the foregoing embodiment, or a combination of multiple interface connection manners.
The electronic device 100 implements the display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for picture processing and connects the display screen 194 and the application processor. The GPU is configured to perform mathematical and geometric computation for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is configured to display pictures, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device 100 may include one or N display screens 194, where N is a positive integer greater than 1.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, saving files such as music and videos in the external memory card.
The internal memory 121 may be configured to store computer-executable program code, where the executable program code includes instructions. The processor 110 executes various functional applications and data processing of the electronic device 100 by running the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store the operating system and an application required by at least one function (such as a sound playback function or a picture playback function). The data storage area may store data (such as audio data and a phone book) created during use of the electronic device 100. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). It should be noted that, in the embodiments of this application, the internal memory 121 has the same meaning as the memory described in the embodiments of this application, and the data storage area in the internal memory 121 may include a memory partition accessible to the CPU and a memory partition accessible to the GPU.
The following describes the hardware connection relationship between the central processing unit CPU, the graphics processing unit GPU, and the memory included in the electronic device 100, and their respective roles. As shown in FIG. 2, the CPU may be a large-scale integrated circuit and is the computing core and control center of a computer. Its function is mainly to interpret computer instructions and process data in computer software.
The CPU mainly includes an arithmetic logic unit (ALU), a cache, and the data, control, and status buses that connect them. Together with the internal memory and input/output (I/O) devices, these are collectively referred to as the three core components of an electronic computer.
The GPU, also called the display core, visual processor, or display chip, is a microprocessor dedicated to picture computation. The purpose of the GPU is to convert and drive the display information required by the electronic device and to provide line scan signals to the display to control its correct operation; it is an important component connecting the display and the CPU. As shown in FIG. 2, the GPU may include: a job management module for managing the GPU's execution of rendering instructions issued by the CPU; an Advanced Peripheral Bus (APB) module; a shader module; rendering materials; a memory management unit (MMU); and an L2 cache module. The CPU sends commands to the GPU, which may specifically be instructions to draw a picture; the GPU can exchange data with the CPU by sending interrupts.
The internal memory, referred to as the memory for short, is the device in the electronic device used to store data and programs and is the bridge for communication with the CPU and GPU. As shown in FIG. 2, the memory may be double data rate (DDR) memory, referred to as DDR memory for short. All programs in the electronic device run in the memory, so the performance of the memory has a great impact on the electronic device. The data storage area in the memory may be used to temporarily store the computation data of the CPU or GPU and the data exchanged with external storage such as a hard disk. As long as the electronic device is running, the CPU or GPU transfers the data to be computed into the memory for computation and then transfers the result out of the memory when the computation is complete. Therefore, as shown in FIG. 2, the DDR memory includes a memory partition accessible to the GPU and a memory partition accessible to the CPU.
The software system of the electronic device 100 may use a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiments of this application use the layered Android system as an example to describe the software structure of the electronic device 100.
FIG. 3 is a block diagram of the software structure of the electronic device 100 according to an embodiment of this application.
The layered architecture divides the software into several layers, and each layer has a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
The application layer may include a series of application packages.
As shown in FIG. 3, the application packages may include applications such as Camera, Gallery, Calendar, Phone, Map, Navigation, WLAN, Bluetooth, Music, Video, and Messages.
The application framework layer provides an application programming interface (API) and a programming framework for the applications at the application layer. The application framework layer includes some predefined functions.
As shown in FIG. 3, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, and so on.
The content provider is used to store and retrieve data and make the data accessible to applications. The data may include videos, pictures, audio, calls made and received, browsing history and bookmarks, the phone book, and the like.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system can be used to build applications. A display interface may consist of one or more views. For example, a display interface including a short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide the communication functions of the electronic device 100, for example, management of call status (including connected, hung up, and so on).
The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables an application to display notification information in the status bar; it can be used to convey notification-type messages and can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify of a completed download, a message reminder, and so on. The notification manager may also present a notification in the status bar at the top of the system in the form of a chart or scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is played, the electronic device vibrates, or the indicator light flashes.
The Android runtime includes the core libraries and the virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include multiple functional modules, for example, a surface manager, media libraries, a 3D graphics processing library (for example, OpenGL ES), and a 2D graphics engine (for example, SGL).
The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of many commonly used audio and video formats, as well as static image files. The media libraries can support multiple audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The 3D graphics processing library is used to implement 3D graphics drawing, picture rendering, composition, layer processing, and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
The following describes, by way of example, the working processes of the software and hardware of the electronic device 100 with reference to a captured scenario that triggers picture display.
When the touch sensor 180K receives a touch operation, the corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as the touch coordinates and the timestamp of the touch operation). The original input event is stored at the kernel layer. The application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking an example in which the touch operation is a touch tap operation and the control corresponding to the tap operation is the control of the Gallery application icon, the Gallery application calls the interface of the application framework layer to start the Gallery application, then starts the display driver by calling the kernel layer, and presents the picture through the display screen 194.
本申请实施例提供一种图片处理方法,如图4,可以包括图片加载、图片解码和GPU绘制这几个处理过程。
首先对待显示的图片进行图片加载过程。手机显示一个应用的交互界面,当检测到用户作用于该交互界面的预设操作时,例如预设操作可以包括滑动,点击,双击,长按,重按,隔空手势,视线焦点移动等,这时,首先需要对待显示的图片加载处理,得到待显示的图片数据。其中,待显示的图片数据可以为手机本地图片文件或下载得到的图片数据流,图片数据的格式可以为PNG、JPEG或Stream等格式。其中,本地图片文件可以为系统通过输入\输出口读取手机本地存储的图片文件,图片数据流可以为通过统一资源定位符(Uniform Resource Locator,URL)从网络下载的图片数据。
Next, picture decoding is performed. This part of the processing may be the process, shown in FIG. 5, in which an application in the mobile phone displays a picture. When the application needs to display a picture, the bitmap data obtained by the decoder is not stored in the memory partition of the CPU; instead, the texture management module accelerates the processing in parallel, packages the bitmap data into texture data, and stores the generated texture data in the memory partition of the GPU. This saves the CPU memory space occupied by bitmap decoding in prior-art picture display and eliminates the time overhead of uploading texture data, so that the user views pictures smoothly and the user experience is improved.
Finally, the GPU drawing operation may specifically be as follows: the GPU receives a drawing instruction sent by the CPU and performs drawing processing on the texture data in the GPU memory partition, and the generated rendered data is stored in the GPU memory for the subsequent display processing.
On the basis of understanding the principles of this application, the technical solutions of this application are further described below.
As shown in FIG. 6, the picture processing method may include the following steps 601 to 604.
601: The electronic device displays the interaction interface of an application. When a preset operation performed by the user on the interaction interface is detected, the electronic device obtains the picture data of the picture to be displayed, creates a decoder, and defines a memory allocator, where the memory allocator is used to request a memory partition accessible to the graphics processing unit (GPU), and the memory partition accessible to the GPU is used to store texture data.
It should be noted that the decoder includes a software decoder and a hardware decoder. This embodiment uses the software decoder as an example for description, which does not constitute a limitation on this application.
The embodiments of this application use a mobile phone as an example of the electronic device for description.
For example, when an application is running, the mobile phone displays the interaction interface of the application. If the mobile phone detects a sliding operation performed by the user on the interaction interface, the picture content displayed on the screen needs to be refreshed. As shown in FIG. 7, after the touch hardware of the hardware layer of the mobile phone receives the user's trigger operation, it sends the input event to the touch input of the kernel layer. The input event then passes through the read-event and event-dispatch components of the system libraries: the read-event component is responsible for reading input events from the touch input, and the event-dispatch component is responsible for dispatching input events to the window management of the application framework layer. The window management is mainly used to dispatch input events to different processing modules. For example, the picture display event of this application is dispatched by the window management to the interface management for processing, and the interface management passes the event, through event delivery, to the corresponding interface display area; when the interface management receives the picture display event, it performs the processing shown in FIG. 5 above. At this point, the window management of the application framework layer receives the input event and obtains the picture data. This can be understood as the window management receiving an input event of a trigger operation for picture display, where the input event is used to trigger obtaining of the picture data to be displayed.
The picture data to be displayed includes a picture file stored locally on the mobile phone or a picture data stream downloaded through a URL. For example, when the user taps an application such as Gallery on the mobile phone and the mobile phone needs to display thumbnails of the pictures in the Gallery, a display scenario for locally stored pictures is triggered. For another example, when the application is WeChat and the user slides a finger on the screen so that the picture content displayed on the WeChat display interface needs to change, the picture content can be downloaded as a picture data stream through a URL to display the picture after the sliding operation.
When the software decoder is created, as shown in FIG. 7, specifically, upon obtaining the trigger operation for picture display, the application calls the decoding module in the system libraries through the decoding interface to create the software decoder. When the decoding module is called, the system also defines a memory allocator by calling the memory management, thereby defining a memory allocator belonging to the decoding module. In other words, at the same time as the software decoder is triggered to decode, the memory allocation attribute of the data generated by decoding is defined. In this embodiment of this application, what the decoding module obtains after decoding the picture data is texture data; therefore, the memory partition requested by the memory allocator defined in this application is a memory partition that stores the texture data and is accessible to the GPU.
In the embodiments of this application, two new classes, or in other words two new functions (a first function and a second function), are added to the Android runtime and system libraries and to the memory management of the software structure, to implement the memory allocation and management process in picture decoding. For example, the two newly added functions may be TextureImageAllocator and TextureImage, where TextureImageAllocator may be used to request allocation of a memory partition accessible to the GPU, and TextureImage may be used to indicate that the memory partition is used to store texture data.
For example, the related software execution flow may be as shown in FIG. 8. The CPU calls the decoding interface through the entry implemented at the bottom layer of the decoder, selects and calls the texture-data-type memory allocator TextureImageAllocator, and then calls the class TextureImage that holds texture data, starting to request allocation of a memory partition from the internal memory. The CPU then calls, through the class that holds texture data, the interface that starts allocating the memory partition, points the texture-data-type memory allocator to the corresponding memory allocation interface, and starts allocating the memory partition. After the internal memory has allocated the memory partition, the CPU obtains the physical address of the allocated memory partition and points the physical address of the memory partition to the memory partition that stores the decoded data.
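A minimal C++ sketch of the two added classes is given below. Only the names TextureImageAllocator and TextureImage come from this embodiment; the other identifiers are hypothetical, and a plain heap allocation stands in for the GPU memory request interface of step 602.

    // Sketch of the two added classes. Only TextureImageAllocator and TextureImage
    // are named in this embodiment; everything else here is an illustrative assumption.
    #include <cstddef>
    #include <cstdlib>

    // Handle to a GPU-accessible memory partition: its address range and size.
    struct GpuPartition {
        void* address;      // start of the GPU-accessible address range
        std::size_t size;   // size of the partition in bytes
    };

    // Stand-in for the standard GPU memory request interface ("first interface" in step 602).
    static GpuPartition allocateGpuPartition(std::size_t bytes) {
        return GpuPartition{std::malloc(bytes), bytes};
    }

    // Holds texture data; the decoded output is written directly into this partition.
    class TextureImage {
    public:
        explicit TextureImage(GpuPartition partition) : partition_(partition) {}
        void* data() const { return partition_.address; }
        std::size_t size() const { return partition_.size; }
    private:
        GpuPartition partition_;
    };

    // Memory allocator attached to the decoding module: instead of a CPU-side bitmap
    // buffer, it requests a GPU-accessible partition sized for the texture output.
    class TextureImageAllocator {
    public:
        TextureImage allocate(std::size_t width, std::size_t height, std::size_t bytesPerPixel) {
            return TextureImage(allocateGpuPartition(width * height * bytesPerPixel));
        }
    };

Defining the allocator at the moment the decoder is created is what allows the decoded output to land directly in GPU-accessible memory rather than in a CPU-side bitmap buffer.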
602: The electronic device allocates a memory partition accessible to the GPU.
The electronic device allocates the memory partition. The memory allocator is used to call a first interface to request from the internal memory a memory partition accessible to the GPU, where the first interface may be a standard interface through which the GPU requests memory partitions, and the memory partition accessible to the GPU includes a physical address range accessible to the GPU and the size of the memory partition accessible to the GPU.
The specific process of allocating the memory partition may be as follows. The system calls the GPU's memory request interface, which is defined and encapsulated by the system and can be used to request and respond to memory allocation, thereby implementing GPU memory allocation. What is obtained after the request is a DDR address space that the GPU is authorized to access, and the response data may include pointer data used to store the memory address. This application thus avoids the memory waste caused by allocating two blocks of memory in the prior art.
For example, the memory management calls the GPU's standard memory request interface to request a memory partition from the DDR memory, and the DDR memory feeds back to the GPU the size of the allocated memory partition, a memory pointer indicating the physical address of the memory partition, an indication that the data type stored in the memory partition is texture data, and so on.
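This embodiment does not name the standard interface through which the GPU-accessible partition is requested. The following is a minimal sketch assuming one possible realization on Android, the NDK AHardwareBuffer API, which returns a buffer that both the CPU (writing the converted texture rows) and the GPU (sampling the texture) can access; the function name and the RGBA8888 format are illustrative assumptions, not part of this embodiment.

    // Sketch: requesting a GPU-accessible memory partition via the Android NDK
    // AHardwareBuffer API, used here only as one possible "first interface".
    #include <android/hardware_buffer.h>
    #include <cstdint>

    AHardwareBuffer* requestGpuAccessiblePartition(uint32_t width, uint32_t height) {
        AHardwareBuffer_Desc desc = {};
        desc.width = width;
        desc.height = height;
        desc.layers = 1;
        desc.format = AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM;
        // The CPU writes the converted texture rows; the GPU samples the buffer as a texture.
        desc.usage = AHARDWAREBUFFER_USAGE_CPU_WRITE_OFTEN |
                     AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE;

        AHardwareBuffer* buffer = nullptr;
        if (AHardwareBuffer_allocate(&desc, &buffer) != 0) {
            return nullptr;  // the GPU-accessible partition could not be allocated
        }
        return buffer;
    }

In such a realization, the CPU side would obtain a writable address for the converted rows with AHardwareBuffer_lock before running the decode loop of step 603.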
603: The electronic device decodes the picture data to be displayed into bitmap data, packages the bitmap data into texture data, and stores the texture data in the memory partition accessible to the GPU.
The picture data is decoded row by row to generate bitmap data. The system directly performs data type conversion on the generated bitmap data row by row to generate texture data, which is stored in the GPU-accessible memory partition requested in step 602.
In this embodiment of this application, a library for converting texture data in parallel, denoted as the texture conversion dynamic library libtexture, is added to the Android runtime and system library layer. This dynamic library can convert bitmap data into texture data. The libtexture dynamic library in this embodiment can dynamically generate a conversion function according to the data types of the generated bitmap data and the texture data, implement the conversion function through the libtexture dynamic library, and directly generate texture data by means of parallel processing and an acceleration library.
The related software execution flow may be as shown in FIG. 8. The CPU calls a decoding function (for example, a libjpeg decoding function) and starts scanning and decoding row by row, generating bitmap data once a row of picture data has been decoded. The CPU then calls the texture conversion (for example, the libtexture dynamic library), converts the bitmap data into texture data row by row through parallelized acceleration, and stores the generated texture data in the requested memory space. The CPU then processes the next row according to the above procedure.
For example, the CPU calls the decoding function libjpeg to decode the first row of picture data and generate the bitmap data corresponding to the first row of picture data; the CPU calls the texture conversion dynamic library libtexture to perform data type conversion on the bitmap data corresponding to the first row of picture data and generate the texture data corresponding to the first row of picture data, where the texture conversion dynamic library includes a conversion function that converts bitmap data into texture data; the CPU then performs the above processing on the second row of the picture data to be displayed, and so on until the last row of the picture data to be displayed has been processed.
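A minimal sketch of this row-by-row loop is given below, using the libjpeg scanline API. The conversion routine stands in for the libtexture dynamic library, whose real interface is not specified in this embodiment, and gpu_partition is assumed to point at the GPU-accessible memory requested in step 602.

    // Sketch of the row-by-row decode-and-convert loop (libjpeg scanline API).
    // convertRowToTexture() is a stand-in for the libtexture conversion function.
    #include <jpeglib.h>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Stand-in conversion: expands one decoded RGB bitmap row into an RGBA texture row.
    // A real libtexture implementation would pick and parallelize the conversion
    // according to the source and destination data types.
    static void convertRowToTexture(const uint8_t* bitmap_row, size_t pixels,
                                    uint8_t* texture_row) {
        for (size_t i = 0; i < pixels; ++i) {
            texture_row[4 * i + 0] = bitmap_row[3 * i + 0];
            texture_row[4 * i + 1] = bitmap_row[3 * i + 1];
            texture_row[4 * i + 2] = bitmap_row[3 * i + 2];
            texture_row[4 * i + 3] = 0xFF;  // opaque alpha
        }
    }

    void decodeJpegIntoGpuPartition(const unsigned char* jpeg_data, unsigned long jpeg_size,
                                    uint8_t* gpu_partition, size_t texture_stride) {
        jpeg_decompress_struct cinfo;
        jpeg_error_mgr jerr;
        cinfo.err = jpeg_std_error(&jerr);
        jpeg_create_decompress(&cinfo);
        jpeg_mem_src(&cinfo, const_cast<unsigned char*>(jpeg_data), jpeg_size);
        jpeg_read_header(&cinfo, TRUE);
        jpeg_start_decompress(&cinfo);  // assumes RGB output (3 components per pixel)

        // One temporary row of bitmap data; no full CPU-side bitmap is ever kept.
        std::vector<uint8_t> row(cinfo.output_width * cinfo.output_components);
        JSAMPROW row_ptr = row.data();

        // Decode row by row; each decoded row is converted to texture data and written
        // straight into the GPU-accessible partition.
        while (cinfo.output_scanline < cinfo.output_height) {
            unsigned int current_row = cinfo.output_scanline;
            jpeg_read_scanlines(&cinfo, &row_ptr, 1);
            convertRowToTexture(row.data(), cinfo.output_width,
                                gpu_partition + current_row * texture_stride);
        }

        jpeg_finish_decompress(&cinfo);
        jpeg_destroy_decompress(&cinfo);
    }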
In this embodiment of this application, the basic functions of an existing decoder are used and the function and algorithm for packaging textures are extended. Specifically, on the basis of the libjpeg decoding function and the existing decoding capability, the libtexture dynamic library is added, and the data generated by libjpeg decoding is turned into texture data through parallel decoding processing and data type conversion, thereby resolving the stutter that may be caused by the time-consuming texture conversion and upload process in the prior art and improving the user experience.
604: The electronic device triggers the GPU to read the texture data and perform drawing processing to obtain rendered data, stores the rendered data in the memory partition of the display, and triggers the display to display the picture according to the rendered data.
Further, the GPU may perform drawing processing according to the texture data and the drawing instruction to obtain rendered data. As shown in FIG. 7, the rendered data is saved in the GPU memory. After the data composition of the display processing (SurfaceFlinger) obtains this block of GPU memory, the rendered data is stored, through the composer hardware abstraction layer and the display driver, in the memory of the liquid crystal display (LCD) and is displayed by the LCD.
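A minimal sketch of the GPU-side drawing path follows, assuming the texture data sits in an AHardwareBuffer as in the earlier sketches. The buffer is wrapped as an EGLImage and bound to an OpenGL ES texture, so the GPU reads it without any upload copy; the choice of OpenGL ES and these EGL extensions is an illustrative assumption rather than part of this embodiment.

    // Sketch: binding the GPU-accessible partition as an OpenGL ES texture (zero copy).
    #include <android/hardware_buffer.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    GLuint bindGpuPartitionAsTexture(EGLDisplay display, AHardwareBuffer* buffer) {
        // Extension entry points are looked up at run time.
        auto getClientBuffer = reinterpret_cast<PFNEGLGETNATIVECLIENTBUFFERANDROIDPROC>(
            eglGetProcAddress("eglGetNativeClientBufferANDROID"));
        auto createImage = reinterpret_cast<PFNEGLCREATEIMAGEKHRPROC>(
            eglGetProcAddress("eglCreateImageKHR"));
        auto targetTexture = reinterpret_cast<PFNGLEGLIMAGETARGETTEXTURE2DOESPROC>(
            eglGetProcAddress("glEGLImageTargetTexture2DOES"));

        // Wrap the GPU-accessible buffer as an EGLImage (no data copy is made).
        const EGLint attribs[] = {EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE};
        EGLImageKHR image = createImage(display, EGL_NO_CONTEXT,
                                        EGL_NATIVE_BUFFER_ANDROID,
                                        getClientBuffer(buffer), attribs);

        // Bind the EGLImage to a texture object; the drawing instruction issued by the
        // CPU then renders from this texture, and the rendered data stays in GPU memory
        // until SurfaceFlinger composes it for the display.
        GLuint texture = 0;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        targetTexture(GL_TEXTURE_2D, static_cast<GLeglImageOES>(image));
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return texture;
    }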
Through the additions and improvements made to the application framework layer and the system libraries of the system, the embodiments of this application perform parallel processing directly after the picture data is decoded to generate texture data, and store the texture data in the memory partition requested for the GPU, so that the GPU performs drawing processing according to the texture data. This solves the waste caused by requesting two blocks of memory, one for the CPU and one for the GPU, in the prior art, and also solves the time overhead of copying data from the CPU memory to the GPU memory during texture upload in the prior art, thereby alleviating stutter and improving the user experience.
Other embodiments of this application provide an electronic device. The electronic device may include a memory and one or more processors, where the memory is coupled to the processors. The memory is configured to store computer program code, and the computer program code includes computer instructions. When the processor executes the computer instructions, the electronic device can perform the functions or steps in the foregoing method embodiments. For the structure of the electronic device, refer to the structure of the electronic device 100 shown in FIG. 1.
An embodiment of this application further provides a chip system, which can be applied to the electronic device in the foregoing embodiments. As shown in FIG. 9, the chip system includes at least one processor 901 and at least one interface circuit 902. The processor 901 and the interface circuit 902 may be interconnected by a line. For example, the interface circuit 902 may be configured to receive signals from another apparatus (for example, the memory of the electronic device). For another example, the interface circuit 902 may be configured to send signals to another apparatus (for example, the processor 901). For example, the interface circuit 902 may read instructions stored in the memory and send the instructions to the processor 901. When the instructions are executed by the processor 901, the electronic device can be caused to perform the functions or steps performed by the electronic device in the foregoing embodiments. Of course, the chip system may further include other discrete devices, which is not specifically limited in the embodiments of this application.
An embodiment of this application further provides a computer storage medium. The computer storage medium includes computer instructions. When the computer instructions run on the foregoing electronic device, the electronic device is caused to perform the functions or steps performed by the mobile phone in the foregoing method embodiments.
An embodiment of this application further provides a computer program product. When the computer program product runs on a computer, the computer is caused to perform the functions or steps performed by the mobile phone in the foregoing method embodiments.
From the description of the foregoing implementations, a person skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the foregoing functional modules is used as an example for illustration. In actual applications, the foregoing functions may be assigned to and completed by different functional modules as required, that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. For example, the division into modules or units is merely a logical function division, and there may be other division manners in actual implementation; for example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place or may be distributed in a plurality of different places. Some or all of the units may be selected as required to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may physically exist separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (13)

  1. A picture processing method, applied to an electronic device, wherein the method comprises:
    displaying an interaction interface of an application, and when a preset operation performed by a user on the interaction interface is detected, decoding picture data to be displayed by the application into bitmap data, and packaging the bitmap data into texture data;
    storing the texture data in a memory partition accessible to a graphics processing unit (GPU);
    triggering the GPU to read the texture data and perform drawing processing to obtain rendered data; and
    triggering a display to display a picture according to the rendered data.
  2. The method according to claim 1, wherein before the decoding picture data to be displayed by the application into bitmap data, the method further comprises:
    creating a software decoder; and
    defining a memory allocator, wherein the memory allocator is configured to request the memory partition accessible to the GPU, and the memory partition accessible to the GPU is used to store the texture data.
  3. The method according to claim 2, wherein the memory allocator being configured to request the memory partition accessible to the GPU comprises:
    the memory allocator being configured to call a first interface to request, from an internal memory, the memory partition accessible to the GPU, wherein the first interface is a standard interface through which the GPU requests memory partitions, and the memory partition accessible to the GPU comprises a physical address range accessible to the GPU and a size of the memory partition accessible to the GPU.
  4. The method according to any one of claims 1 to 3, wherein the decoding picture data to be displayed by the application into bitmap data and packaging the bitmap data into texture data comprises:
    decoding picture data of a first row of the picture data to be displayed to generate bitmap data of the first row of picture data, and performing data conversion on the bitmap data of the first row of picture data to generate texture data of the first row of picture data; and then performing the foregoing processing on a second row of the picture data to be displayed, until picture data of a last row of the picture data to be displayed has been processed.
  5. The method according to claim 4, wherein the decoding picture data of the first row of the picture data to be displayed to generate the bitmap data of the first row of picture data, performing data conversion on the bitmap data of the first row of picture data to generate the texture data of the first row of picture data, and then performing the foregoing processing on the second row of the picture data to be displayed until the picture data of the last row of the picture data to be displayed has been processed comprises:
    calling a decoding function to decode the first row of picture data and generate the bitmap data corresponding to the first row of picture data;
    calling a texture conversion dynamic library to perform data type conversion on the bitmap data corresponding to the first row of picture data and generate the texture data corresponding to the first row of picture data, wherein the texture conversion dynamic library comprises a conversion function that converts bitmap data into texture data; and
    then performing the foregoing processing on the second row of picture data of the picture data to be displayed, until the picture data of the last row of the picture data to be displayed has been processed.
  6. An electronic device, wherein the electronic device comprises a memory and one or more processors; the memory is coupled to the processor; the memory is configured to store computer program code, and the computer program code comprises computer instructions; and when the processor executes the computer instructions, the electronic device performs the following operations:
    displaying an interaction interface of an application, and when a preset operation performed by a user on the interaction interface is detected, decoding picture data to be displayed by the application into bitmap data, and packaging the bitmap data into texture data;
    storing the texture data in a memory partition accessible to a graphics processing unit (GPU);
    triggering the GPU to read the texture data and perform drawing processing to obtain rendered data; and
    triggering a display to display a picture according to the rendered data.
  7. The electronic device according to claim 6, wherein the electronic device is further configured to perform the following operations:
    creating a software decoder; and
    defining a memory allocator, wherein the memory allocator is configured to request the memory partition accessible to the GPU, and the memory partition accessible to the GPU is used to store the texture data.
  8. The electronic device according to claim 7, wherein the memory allocator being configured to request the memory partition accessible to the GPU specifically comprises:
    the memory allocator being configured to call a first interface to request, from an internal memory, the memory partition accessible to the GPU, wherein the first interface is a standard interface through which the GPU requests memory partitions, and the memory partition accessible to the GPU comprises a physical address range accessible to the GPU and a size of the memory partition accessible to the GPU.
  9. The electronic device according to any one of claims 6 to 8, wherein
    the decoding picture data to be displayed by the application into bitmap data and packaging the bitmap data into texture data comprises:
    decoding picture data of a first row of the picture data to be displayed to generate bitmap data of the first row of picture data, and performing data conversion on the bitmap data of the first row of picture data to generate texture data of the first row of picture data; and then performing the foregoing processing on a second row of the picture data to be displayed, until picture data of a last row of the picture data to be displayed has been processed.
  10. The electronic device according to claim 9, wherein
    the decoding picture data of the first row of the picture data to be displayed to generate the bitmap data of the first row of picture data, performing data conversion on the bitmap data of the first row of picture data to generate the texture data of the first row of picture data, and then performing the foregoing processing on the second row of the picture data to be displayed until the picture data of the last row of the picture data to be displayed has been processed comprises:
    calling a decoding function to decode the first row of picture data and generate the bitmap data corresponding to the first row of picture data;
    calling a texture conversion dynamic library to perform data type conversion on the bitmap data corresponding to the first row of picture data and generate the texture data corresponding to the first row of picture data, wherein the texture conversion dynamic library comprises a conversion function that converts bitmap data into texture data; and
    then performing the foregoing processing on the second row of picture data of the picture data to be displayed, until the picture data of the last row of the picture data to be displayed has been processed.
  11. A chip system, wherein the chip system is applied to an electronic device; the chip system comprises one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected by a line; the interface circuit is configured to receive a signal from a memory of the electronic device and send the signal to the processor, wherein the signal comprises computer instructions stored in the memory; and when the processor executes the computer instructions, the electronic device performs the picture processing method according to any one of claims 1 to 5.
  12. A readable storage medium, wherein the readable storage medium stores instructions, and when the readable storage medium runs on an electronic device, the electronic device is caused to perform the picture processing method according to any one of claims 1 to 5.
  13. A computer program product, wherein when the computer program product runs on a computer, the computer is caused to perform the picture processing method according to any one of claims 1 to 5.
PCT/CN2020/102190 2019-07-19 2020-07-15 一种图片处理方法及装置 WO2021013019A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20844804.3A EP3971818A4 (en) 2019-07-19 2020-07-15 IMAGE PROCESSING METHOD AND APPARATUS
US17/627,951 US20220292628A1 (en) 2019-07-19 2020-07-15 Image processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910657456.8 2019-07-19
CN201910657456.8A CN112241932A (zh) 2019-07-19 2019-07-19 一种图片处理方法及装置

Publications (1)

Publication Number Publication Date
WO2021013019A1 true WO2021013019A1 (zh) 2021-01-28

Family

ID=74168375

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/102190 WO2021013019A1 (zh) 2019-07-19 2020-07-15 一种图片处理方法及装置

Country Status (4)

Country Link
US (1) US20220292628A1 (zh)
EP (1) EP3971818A4 (zh)
CN (1) CN112241932A (zh)
WO (1) WO2021013019A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112562022A (zh) * 2021-01-25 2021-03-26 翱捷科技股份有限公司 一种电子设备的图片显示方法及系统
CN113051072A (zh) * 2021-03-02 2021-06-29 长沙景嘉微电子股份有限公司 内存管理方法,装置,系统及计算机可读存储介质
CN113326085B (zh) * 2021-05-18 2022-08-23 翱捷科技股份有限公司 一种基于lvgl的jpeg格式图片显示方法及系统
CN114327900B (zh) * 2021-12-30 2024-07-23 四川启睿克科技有限公司 一种管理双缓冲技术中线程调用防止内存泄漏的方法
CN114745570B (zh) * 2022-06-09 2022-11-11 荣耀终端有限公司 图像的渲染方法、电子设备及存储介质
CN115134661B (zh) * 2022-06-28 2024-08-27 龙芯中科(合肥)技术有限公司 视频处理方法及视频处理应用
CN116055540B (zh) * 2023-01-13 2023-12-19 北京曼恒数字技术有限公司 虚拟内容的显示系统、方法、设备及计算机可读介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101520900A (zh) * 2009-03-30 2009-09-02 中国人民解放军第三军医大学第一附属医院 一种利用gpu加速cr/dr/ct图像显示及图像处理的方法及专用设备
CN102547058A (zh) * 2011-12-31 2012-07-04 福建星网视易信息系统有限公司 Jpeg图像处理方法以及系统
US8913068B1 (en) * 2011-07-12 2014-12-16 Google Inc. Displaying video on a browser
CN104866318A (zh) * 2015-06-05 2015-08-26 北京金山安全软件有限公司 一种多窗口中标签页的展示方法及装置
CN105096367A (zh) * 2014-04-30 2015-11-25 广州市动景计算机科技有限公司 优化Canvas绘制性能的方法及装置
CN105427236A (zh) * 2015-12-18 2016-03-23 魅族科技(中国)有限公司 一种图像渲染方法及装置

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101635004B (zh) * 2009-08-18 2011-07-13 中兴通讯股份有限公司 基于终端的颜色匹配方法及装置
CN103605534B (zh) * 2013-10-31 2017-04-05 优视科技有限公司 图片加载方法及装置
US9286657B1 (en) * 2015-06-25 2016-03-15 Mylio, LLC Efficient image processing using dynamically sized tiles
CN105678680A (zh) * 2015-12-30 2016-06-15 魅族科技(中国)有限公司 一种图像处理的方法和装置
CN106127673B (zh) * 2016-07-19 2019-02-12 腾讯科技(深圳)有限公司 一种视频处理方法、装置及计算机设备
US10565354B2 (en) * 2017-04-07 2020-02-18 Intel Corporation Apparatus and method for protecting content in virtualized and graphics environments
US12086705B2 (en) * 2017-12-29 2024-09-10 Intel Corporation Compute optimization mechanism for deep neural networks
US20190188386A1 (en) * 2018-12-27 2019-06-20 Intel Corporation Protecting ai payloads running in gpu against main cpu residing adversaries

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101520900A (zh) * 2009-03-30 2009-09-02 中国人民解放军第三军医大学第一附属医院 一种利用gpu加速cr/dr/ct图像显示及图像处理的方法及专用设备
US8913068B1 (en) * 2011-07-12 2014-12-16 Google Inc. Displaying video on a browser
CN102547058A (zh) * 2011-12-31 2012-07-04 福建星网视易信息系统有限公司 Jpeg图像处理方法以及系统
CN105096367A (zh) * 2014-04-30 2015-11-25 广州市动景计算机科技有限公司 优化Canvas绘制性能的方法及装置
CN104866318A (zh) * 2015-06-05 2015-08-26 北京金山安全软件有限公司 一种多窗口中标签页的展示方法及装置
CN105427236A (zh) * 2015-12-18 2016-03-23 魅族科技(中国)有限公司 一种图像渲染方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3971818A4

Also Published As

Publication number Publication date
US20220292628A1 (en) 2022-09-15
EP3971818A1 (en) 2022-03-23
CN112241932A (zh) 2021-01-19
EP3971818A4 (en) 2022-08-10

Similar Documents

Publication Publication Date Title
WO2021013019A1 (zh) Image processing method and apparatus
WO2021057830A1 (zh) Information processing method and electronic device
US9715750B2 (en) System and method for layering using tile-based renderers
WO2020156264A1 (zh) Rendering method and apparatus
WO2021129253A1 (zh) Method for displaying multiple windows, electronic device, and system
TWI696952B (zh) Resource processing method and apparatus
JP2013542515A (ja) Cross-environment redirection
JP2013546043A (ja) Instant remote rendering
CN114972607B (zh) Data transmission method, apparatus, and medium for accelerating image display
KR20120087967A (ko) Method and apparatus for displaying an application program image
CN115629884B (zh) Thread scheduling method, electronic device, and storage medium
WO2024046084A1 (zh) User interface display method and related apparatus
WO2023143280A1 (zh) Image rendering method and related apparatus
WO2021169379A1 (zh) Permission reuse method, resource access method based on permission reuse, and related device
US20130067502A1 (en) Atlasing and Virtual Surfaces
CN115904563A (zh) Data processing method, apparatus, and storage medium during application startup
WO2021052488A1 (zh) Information processing method and electronic device
US10678553B2 (en) Pro-active GPU hardware bootup
WO2021253922A1 (zh) Font switching method and electronic device
WO2023284625A1 (zh) Cross-platform display method for applications, readable medium, and electronic device
WO2013185664A1 (zh) Operation method and apparatus
US20240290241A1 (en) Display drive method based on frame data, electronic device, and storage medium
WO2023280141A1 (zh) Method for refreshing a user interface and electronic device
US20240212635A1 (en) Method for adjusting display screen brightness, electronic device, and storage medium
WO2024055867A1 (zh) Interface display method based on application clone, and related apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20844804

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020844804

Country of ref document: EP

Effective date: 20211217

NENP Non-entry into the national phase

Ref country code: DE