WO2019071600A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
WO2019071600A1
Authority
WO
WIPO (PCT)
Prior art keywords
rendering
image frame
command queue
image processing
unit
Prior art date
Application number
PCT/CN2017/106153
Other languages
English (en)
Chinese (zh)
Inventor
党茂昌
符玉襄
周喜渝
孟坤
蒋铭辉
陈国栋
余先宇
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to CN201780066411.2A priority Critical patent/CN109891388A/zh
Priority to PCT/CN2017/106153 priority patent/WO2019071600A1/fr
Publication of WO2019071600A1 publication Critical patent/WO2019071600A1/fr

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 — Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/38 — Concurrent instruction execution, e.g. pipeline, look ahead
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 — 3D [Three Dimensional] image rendering

Definitions

  • the embodiments of the present invention relate to the field of image processing technologies, and in particular, to an image processing method and apparatus.
  • For each image frame in a video, the terminal device performs three tasks: frame processing (i.e., calculation), rendering, and display.
  • An existing terminal device performs the computing task and the rendering task serially: only after the terminal device has completed the computing task and the rendering task for the Pth (P ≥ 1) image frame does it process the (P+1)th image frame. As a result, the rate at which the terminal device processes image frames is low.
  • The present application provides an image processing method and apparatus, which can solve the problem that the rate at which a terminal device processes image frames is low.
  • In a first aspect, an image processing method is provided. After acquiring a first rendering command queue that includes a plurality of rendering commands corresponding to the Nth (N ≥ 1) image frame, and a second calculation command queue that includes a plurality of calculation commands corresponding to the (N+1)th image frame, the image processing apparatus renders the Nth image frame according to the first rendering command queue and, in the process of rendering the Nth image frame, calculates all pixels of the (N+1)th image frame according to the second calculation command queue.
  • The image processing apparatus in the embodiment of the present application thus separates the rendering task from the computing task: in the process of rendering the Nth image frame, it calculates all pixels of the (N+1)th image frame, achieving parallel execution of the rendering task and the computing task and effectively increasing the rate at which the image processing apparatus processes image frames.
  • In the method for processing an image frame, the image processing apparatus further acquires a parallel processing message, where the parallel processing message is used to notify the apparatus that, while the Nth image frame is being rendered, all pixels of the (N+1)th image frame are to be calculated.
  • Specifically, “rendering the Nth image frame according to the first rendering command queue and calculating all pixels of the (N+1)th image frame according to the second calculation command queue” means that the image processing apparatus uses a first thread to execute the rendering commands and a second thread to execute the calculation commands, running the first thread and the second thread in parallel. The first thread completes the rendering task and the second thread completes the computing task; under the action of the parallel processing message, the image processing apparatus calculates all pixels of the (N+1)th image frame in the process of rendering the Nth image frame.
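As a rough illustration of the two-thread scheme just described, the sketch below renders frame N on one thread while computing frame N+1 on another. The queue contents and the two executor functions are hypothetical placeholders, not the patent's actual implementation.

```python
import threading

# Hypothetical sketch: a first thread executes the rendering commands of the
# Nth frame while a second thread executes the calculation commands of the
# (N+1)th frame, mirroring the parallel scheme described above.

def execute_rendering_queue(rendering_queue):
    # Execute each rendering command of the Nth image frame in order.
    return [cmd() for cmd in rendering_queue]

def execute_calculation_queue(calculation_queue):
    # Calculate all pixels of the (N+1)th image frame.
    return [cmd() for cmd in calculation_queue]

def process_frame_pair(rendering_queue, calculation_queue):
    """Render frame N and compute frame N+1 in parallel threads."""
    results = {}

    def render_task():
        results["rendered"] = execute_rendering_queue(rendering_queue)

    def compute_task():
        results["computed"] = execute_calculation_queue(calculation_queue)

    first_thread = threading.Thread(target=render_task)    # rendering task
    second_thread = threading.Thread(target=compute_task)  # computing task
    first_thread.start()   # both tasks run concurrently, as triggered by
    second_thread.start()  # the "parallel processing message"
    first_thread.join()
    second_thread.join()
    return results
```

Calling `process_frame_pair` with the queues of consecutive frames overlaps the rendering of frame N with the computation of frame N+1, which is the source of the claimed throughput gain.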
  • After acquiring the first rendering command queue, the image processing apparatus further calculates a feature value of the first rendering command queue according to a preset algorithm and stores that feature value.
  • When N > 1, “rendering the Nth image frame according to the first rendering command queue” is specifically: when the image processing apparatus determines that the feature value of the first rendering command queue is equal to the feature value of a stored target rendering command queue, the apparatus acquires the stored rendering result of the Xth (N > X ≥ 1) image frame corresponding to the target rendering command queue, and renders the Nth image frame according to the rendering result of the Xth image frame. The target rendering command queue includes a plurality of rendering commands corresponding to the Xth image frame.
  • In this case, the Nth image frame repeats the Xth image frame, so the image processing apparatus can render the Nth image frame directly according to the rendering result of the Xth image frame, without executing each rendering command in the first rendering command queue. This reduces the hardware resources used in image rendering and reduces the power consumption of the image processing apparatus.
  • That is, the image processing apparatus may determine, by comparing the feature values of two rendering command queues, whether the image frames corresponding to those queues are repeated. After calculating the feature value of the first rendering command queue, the image processing apparatus stores it so as to facilitate determining whether a subsequent image frame repeats the Nth image frame.
  • The image processing apparatus further stores the rendering result of the Nth image frame after the Nth image frame has been rendered. In this way, if a subsequent image frame repeats the Nth image frame, the image processing apparatus may render it with reference to the stored rendering result of the Nth image frame.
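The feature-value mechanism described above can be sketched as a small cache keyed by a hash of the rendering command queue. The choice of SHA-256 as the “preset algorithm” and the textual form of the commands are assumptions made for illustration only.

```python
import hashlib

# Illustrative sketch of the feature-value cache described above. SHA-256
# over the textual rendering commands stands in for the unspecified
# "preset algorithm"; both are assumptions made for this example.

class RenderCache:
    def __init__(self):
        self._results = {}  # feature value -> stored rendering result

    @staticmethod
    def feature_value(rendering_queue):
        # Compute the feature value of a rendering command queue.
        digest = hashlib.sha256()
        for command in rendering_queue:
            digest.update(command.encode("utf-8"))
        return digest.hexdigest()

    def render(self, rendering_queue, execute):
        """Reuse the stored result of an earlier frame whose rendering
        command queue has the same feature value; otherwise render fully."""
        fv = self.feature_value(rendering_queue)
        if fv in self._results:              # frame repeats an earlier one
            return self._results[fv]
        result = execute(rendering_queue)    # full rendering path
        self._results[fv] = result           # store for later frames
        return result
```

On a repeated frame the `execute` callback is skipped entirely, which corresponds to the reduced hardware usage and power consumption noted above.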
  • In a second aspect, an image processing apparatus is provided, comprising an acquisition unit, a rendering unit, and a computing unit.
  • The acquiring unit is configured to acquire a first rendering command queue, where the first rendering command queue includes a plurality of rendering commands corresponding to the Nth (N ≥ 1) image frame, and to acquire a second calculation command queue, where the second calculation command queue includes a plurality of calculation commands corresponding to the (N+1)th image frame.
  • the rendering unit is configured to render the Nth image frame according to the first rendering command queue acquired by the acquiring unit.
  • the calculating unit is configured to calculate, in the process of rendering the Nth image frame by the rendering unit, all the pixels of the (N+1)th image frame according to the second calculation command queue acquired by the acquiring unit.
  • The acquiring unit is further configured to acquire a parallel processing message, where the parallel processing message is used to notify the apparatus to calculate all pixels of the (N+1)th image frame in the process of rendering the Nth image frame.
  • the rendering unit is specifically configured to execute a rendering command included in the first rendering command queue by using the first thread to render the Nth image frame.
  • The calculating unit is specifically configured to execute, by using the second thread, the calculation commands included in the second calculation command queue to calculate the (N+1)th image frame. Specifically, in response to the parallel processing message acquired by the acquiring unit, the calculating unit calculates all pixels of the (N+1)th image frame in the process of the rendering unit rendering the Nth image frame.
  • the calculating unit is specifically configured to calculate, according to a preset algorithm, a feature value of the first rendering command queue acquired by the acquiring unit.
  • the image processing apparatus provided by the embodiment of the present application further includes a storage unit, configured to store a feature value of the first rendering command queue calculated by the calculating unit.
  • The image processing apparatus provided by the embodiment of the present application further includes a determining unit, configured to determine that the feature value of the first rendering command queue calculated by the calculating unit is equal to a stored first value, where the first value is the feature value of a target rendering command queue, and the target rendering command queue includes a plurality of rendering commands corresponding to the Xth (N > X ≥ 1) image frame.
  • the obtaining unit is further configured to acquire a rendering result of the stored Xth image frame.
  • The rendering unit is specifically configured to render the Nth image frame according to the rendering result of the Xth image frame acquired by the acquiring unit.
  • The image processing apparatus provided by the embodiment of the present application further includes a storage unit, configured to store the rendering result of the Nth image frame after the rendering unit has rendered the Nth image frame.
  • In a third aspect, an image processing apparatus is provided, comprising one or more processors, a memory, and a communication interface.
  • The memory and the communication interface are coupled to the one or more processors via a system bus.
  • the image processing device communicates with other devices through a communication interface.
  • The memory is configured to store computer program code, the computer program code comprising instructions; when the one or more processors execute the instructions stored in the memory, the image processing apparatus performs the image processing method as described in the first aspect above and any possible implementation thereof.
  • In a fourth aspect, a computer storage medium is provided, having computer program code stored therein.
  • When the processor of the image processing apparatus in the third aspect executes the computer program code, the image processing apparatus performs the image processing method as in the first aspect above and any possible implementation thereof.
  • In a fifth aspect, a computer program product comprising instructions is provided which, when run on an image processing apparatus, causes the image processing apparatus to perform the image processing method as in the first aspect above and any of its possible implementations.
  • The names of the above image processing apparatuses do not limit the devices or function modules themselves; in actual implementation, these devices or function modules may appear under other names. As long as the functions of the respective devices or function modules are similar to those of the present application, they fall within the scope of the claims and their equivalents.
  • FIG. 1 is a schematic flow chart of a method for processing an image frame by a GPU in the prior art
  • FIG. 2 is a schematic structural diagram of hardware of a mobile phone according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic flowchart of a method for processing an image frame by a GPU according to an embodiment of the present application
  • FIG. 5 is a schematic structural diagram 1 of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram 2 of an image processing apparatus according to an embodiment of the present disclosure.
  • the words “exemplary” or “such as” are used to mean an example, illustration, or illustration. Any embodiment or design described as “exemplary” or “for example” in the embodiments of the present application should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of the words “exemplary” or “such as” is intended to present the concepts in a particular manner.
  • For each image frame in the video, the terminal device performs three tasks: calculation, rendering, and display. Specifically, a graphics processing unit (GPU) built into the terminal device performs the computing tasks as well as the rendering tasks, and displays the rendered image frames on the display.
  • Existing terminal devices perform computing tasks and rendering tasks serially: the GPU in the terminal device first performs the computing task of the Pth (P ≥ 1) image frame and then performs the rendering task of the Pth image frame; only after the rendering of the Pth image frame is complete does the GPU perform the computing task of the (P+1)th image frame, followed by the rendering task of the (P+1)th image frame, and so on, until the GPU has performed the rendering task of the last image frame. It can be seen that, in a scenario including a large number of image frames, the rate at which the terminal device processes image frames with this method is relatively low. Moreover, even for repeated image frames, the terminal device in the above method needs to re-execute the computing task and the rendering task, which further reduces the rate of processing image frames, and the power consumption of the terminal device is large.
  • To address this, the image processing apparatus in the present application separates the computing task from the rendering task and processes the computing task and the rendering task in parallel.
  • the image processing apparatus calculates all the pixels of the (N+1)th image frame in the process of rendering the Nth image frame, effectively improving the rate of processing the image frame.
  • The image processing apparatus in the embodiment of the present application may be any terminal device such as a mobile phone, an augmented reality (AR) device, a virtual reality (VR) device, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA).
  • the image processing apparatus in the embodiment of the present application may be the mobile phone 100.
  • The embodiment is described in detail below by taking the mobile phone 100 as an example. It should be understood that the illustrated mobile phone 100 is only one example of the image processing apparatus described above; the mobile phone 100 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components.
  • The mobile phone 100 may specifically include components such as a processor 101, a radio frequency (RF) circuit 102, a memory 103, a touch screen 104, a Bluetooth device 105, one or more sensors 106, a Wi-Fi device 107, a positioning device 108, an audio circuit 109, a peripheral interface 110, and a power system 111. These components can communicate over one or more communication buses or signal lines (not shown in FIG. 2). It will be understood by those skilled in the art that the hardware structure shown in FIG. 2 does not constitute a limitation on the mobile phone; the mobile phone 100 may include more or fewer components than those illustrated, may combine some components, or may arrange the components differently.
  • The processor 101 is the control center of the mobile phone 100. It connects the various parts of the mobile phone 100 by using various interfaces and lines, and performs the various functions of the mobile phone 100 and processes data by running or executing applications stored in the memory 103 and calling data stored in the memory 103.
  • processor 101 can include one or more processing units.
  • the processor 101 can integrate an application processor and a modem processor.
  • the application processor mainly processes an operating system, a user interface, an application, and the like; the modem processor mainly processes wireless communication.
  • Alternatively, the modem processor and the application processor may be set independently of each other.
  • The processor 101 may include a GPU 115 and a central processing unit (CPU) 116, or may be a combination of the GPU 115, the CPU 116, a digital signal processor (DSP), and a control chip (for example, a baseband chip) in the communication unit.
  • Both the GPU 115 and the CPU 116 may be a single compute core, or may include multiple compute cores.
  • the GPU 115 is a microprocessor that performs image computing operations on personal computers, workstations, game consoles, and some mobile devices (such as tablets, smart phones, etc.). It can convert the display information required by the mobile phone 100 and provide a line scan signal to the display 104-2 to control the correct display of the display 104-2.
  • the mobile phone 100 may send a corresponding drawing command to the GPU 115.
  • For example, the drawing command may be “draw a rectangle with length and width a × b at coordinate position (x, y)”.
  • The GPU 115 can quickly calculate all the pixels of the graphic according to the drawing command, and draw the corresponding graphic at the specified position on the display 104-2.
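As a toy illustration of how such a drawing command could expand into pixel calculations, the sketch below enumerates every pixel of a rectangle; the textual command format (`draw_rect x y a b`) is an assumption for illustration, not an actual GPU interface.

```python
# Toy sketch: expanding a hypothetical textual drawing command into the set
# of pixels it covers. The "draw_rect x y a b" format is an assumption for
# illustration, not the actual command interface of the GPU 115.

def rect_pixels(x, y, a, b):
    """Enumerate all pixel coordinates of an a-wide, b-tall rectangle
    whose top-left corner sits at (x, y)."""
    return [(x + i, y + j) for j in range(b) for i in range(a)]

def execute_draw_command(command):
    # Parse "draw_rect x y a b" and compute all pixels of the rectangle.
    op, *args = command.split()
    if op != "draw_rect":
        raise ValueError(f"unknown drawing command: {op}")
    return rect_pixels(*map(int, args))
```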
  • the GPU 115 may be integrated in the processor 101 in the form of a functional module, or may be disposed in the mobile phone 100 in a separate physical form (for example, a video card), which is not limited in this embodiment.
  • the GPU 115 performs computational tasks and rendering tasks for image frames.
  • the GPU 115 in the embodiment of the present application includes multiple Stream Processor (SM) units.
  • the SM unit is mainly used to process data transmitted by the CPU and convert the processed data into a digital signal that the display 104-2 can recognize. That is to say, the SM unit in the GPU 115 completes the calculation task and the rendering task of the image frame.
  • The image processing apparatus in the embodiment of the present application uses an idle SM unit to render the Nth image frame and, in the process of rendering the Nth image frame, uses another idle SM unit to calculate all pixels of the (N+1)th image frame. This separation of computing tasks from rendering tasks, together with parallel processing, effectively increases the rate at which image frames are processed.
  • the radio frequency circuit 102 can be used to receive and transmit wireless signals during transmission or reception of information or calls.
  • Generally, the radio frequency circuit 102 receives downlink data from a base station and delivers it to the processor 101 for processing; in addition, it transmits uplink data to the base station.
  • radio frequency circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency circuit 102 can also communicate with other devices through wireless communication.
  • the wireless communication can use any communication standard or protocol, including but not limited to global mobile communication systems, general packet radio services, code division multiple access, wideband code division multiple access, long term evolution, email, short message service, and the like.
  • the memory 103 is used to store applications and data, and the processor 101 executes various functions and data processing of the mobile phone 100 by running applications and data stored in the memory 103.
  • The memory 103 mainly includes a program storage area and a data storage area. The program storage area can store an operating system and the applications required for at least one function (such as a sound playing function or an image playing function); the data storage area can store data created according to the use of the mobile phone 100 (such as audio data and a phone book).
  • the memory 103 may include high speed random access memory (RAM), and may also include nonvolatile memory such as a magnetic disk storage device, a flash memory device, or other volatile solid state storage device.
  • The memory 103 can store various operating systems, for example, an operating system developed by Apple or an operating system developed by Google Inc.
  • the above memory 103 may be independent and connected to the processor 101 via the above communication bus; the memory 103 may also be integrated with the processor 101.
  • the touch screen 104 may specifically include a touch panel 104-1 and a display 104-2.
  • The touch panel 104-1 can collect touch events performed on or near it by the user of the mobile phone 100 (for example, an operation performed by the user on or near the touch panel 104-1 using a finger, a stylus, or any other suitable object), and send the collected touch information to another device (for example, the processor 101).
  • A touch event performed by the user near the touch panel 104-1 may be referred to as a hovering touch. A hovering touch means that the user does not need to directly touch the touchpad in order to select, move, or drag a target (for example, an icon); the user only needs to be near the terminal device in order to perform the desired function.
  • the touch panel 104-1 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • a display (also referred to as display) 104-2 can be used to display information entered by the user or information provided to the user as well as various menus of the mobile phone 100.
  • the display 104-2 can be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the touchpad 104-1 can be overlaid on the display 104-2, and when the touchpad 104-1 detects a touch event on or near it, it is transmitted to the processor 101 to determine the type of touch event, and then the processor 101 may provide a corresponding visual output on display 104-2 depending on the type of touch event.
  • Although in FIG. 2 the touchpad 104-1 and the display 104-2 are shown as two separate components implementing the input and output functions of the mobile phone 100, in some embodiments the touchpad 104-1 may be integrated with the display screen 104-2 to implement the input and output functions of the mobile phone 100. It is to be understood that the touch screen 104 is formed by stacking multiple layers of material; only the touch panel (layer) and the display screen (layer) are shown in the embodiment of the present application, and the other layers are not described.
  • The touch panel 104-1 may be disposed on the front surface of the mobile phone 100 in a full-panel form, and the display screen 104-2 may also be disposed on the front surface of the mobile phone 100 in a full-panel form, so that the front of the mobile phone can have a bezel-less structure.
  • the mobile phone 100 may also have a fingerprint recognition function.
  • The fingerprint collector 112 can be configured on the back of the mobile phone 100 (for example, below the rear camera), or the fingerprint collector 112 can be configured on the front of the mobile phone 100 (for example, below the touch screen 104).
  • the fingerprint collector 112 can be configured in the touch screen 104 to implement the fingerprint recognition function, that is, the fingerprint collector 112 can be integrated with the touch screen 104 to implement the fingerprint recognition function of the mobile phone 100.
  • the fingerprint collector 112 is disposed in the touch screen 104, may be part of the touch screen 104, or may be otherwise disposed in the touch screen 104.
  • the main component of the fingerprint collector 112 in the embodiment of the present application is a fingerprint sensor, which can employ any type of sensing technology, including but not limited to optical, capacitive, piezoelectric or ultrasonic sensing technologies.
  • the mobile phone 100 can also include a Bluetooth device 105 for enabling data exchange between the handset 100 and other short-range terminal devices (eg, mobile phones, smart watches, etc.).
  • the Bluetooth device in the embodiment of the present application may be an integrated circuit or a Bluetooth chip or the like.
  • the handset 100 can also include at least one type of sensor 106, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display of the touch screen 104 according to the brightness of the ambient light, and the proximity sensor may turn off the power of the display when the mobile phone 100 moves to the ear.
  • The accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes); when stationary, it can detect the magnitude and direction of gravity. It can be used for applications that recognize the posture of the mobile phone (such as switching between landscape and portrait orientation, related games, and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Other sensors that can be configured in the mobile phone 100, such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, are not described here.
  • The Wi-Fi device 107 is configured to provide the mobile phone 100 with network access complying with Wi-Fi related standard protocols. The mobile phone 100 can access a Wi-Fi access point through the Wi-Fi device 107, helping the user to send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access.
  • the Wi-Fi device 107 can also function as a Wi-Fi wireless access point, and can provide Wi-Fi network access to other terminal devices.
  • the positioning device 108 is configured to provide a geographic location for the mobile phone 100. It can be understood that the positioning device 108 can be specifically a receiver of a positioning system such as a Global Positioning System (GPS) or a Beidou satellite navigation system, or a Russian GLONASS. After receiving the geographical location transmitted by the positioning system, the positioning device 108 sends the information to the processor 101 for processing, or sends it to the memory 103 for storage. In some other embodiments, the positioning device 108 can also be a receiver of an Assisted Global Positioning System (AGPS), which assists the positioning device 108 in performing ranging and positioning services by acting as an auxiliary server.
  • As an assistance server, the AGPS system provides positioning assistance by communicating over a wireless communication network with the terminal device (for example, the positioning device 108, i.e., the GPS receiver, of the mobile phone 100).
  • The positioning device 108 can also use positioning technology based on Wi-Fi access points. Since each Wi-Fi access point has a globally unique Media Access Control (MAC) address, the mobile phone can scan and collect the broadcast signals of surrounding Wi-Fi access points when Wi-Fi is turned on, and can thereby obtain the MAC addresses broadcast by the Wi-Fi access points. The mobile phone sends the data capable of identifying the Wi-Fi access points (such as the MAC addresses) to a location server through the wireless communication network; the location server retrieves the geographic location of each Wi-Fi access point, calculates the geographic location of the mobile phone by combining the strengths of the Wi-Fi broadcast signals, and sends it to the positioning device 108 of the mobile phone.
  • the audio circuit 109, the speaker 113, and the microphone 114 can provide an audio interface between the user and the handset 100.
  • On one hand, the audio circuit 109 can convert received audio data into an electrical signal and transmit it to the speaker 113, which converts it into a sound signal for output; on the other hand, the microphone 114 converts a collected sound signal into an electrical signal, which the audio circuit 109 receives and converts into audio data. The audio circuit then outputs the audio data to the RF circuit 102 for transmission to, for example, another mobile phone, or outputs the audio data to the memory 103 for further processing.
  • The peripheral interface 110 is used to provide various interfaces for external input/output devices (such as a keyboard, a mouse, an external display, an external memory, or a subscriber identity module card). For example, the mobile phone is connected to a mouse through a Universal Serial Bus (USB) interface, and is connected to a Subscriber Identity Module (SIM) card provided by the service provider through metal contacts in the SIM card slot. The peripheral interface 110 can be used to couple the above external input/output peripherals to the processor 101 and the memory 103.
  • The mobile phone 100 may further include a power supply device 111 (such as a battery and a power management chip) that supplies power to the various components. The battery may be logically connected to the processor 101 through the power management chip, so that functions such as charging, discharging, and power consumption management are implemented through the power supply device 111.
  • The mobile phone 100 may further include a camera (a front camera and/or a rear camera), a flash lamp, a micro projection device, a Near Field Communication (NFC) device, and the like, which are not described here.
  • the image processing method includes:
  • the image processing apparatus acquires a sequence of image frames to be processed during the running of the target application.
  • the target application may be any application installed in the image processing device that relies on the GPU for image calculation.
  • The target application may be downloaded from a third-party application market, or may be an application provided with the image processing device's own system; this is not specifically limited in this embodiment of the present application.
  • For example, the target application is AutoCAD (Auto Computer Aided Design), 3ds Max (3D Studio Max), Pro/Engineer, or a video player.
  • the sequence of image frames to be processed includes a plurality of image frames to be processed.
  • The image processing apparatus may directly acquire all the image frames included in the sequence of image frames to be processed, or may acquire each image frame in the sequence in real time; this is not specifically limited in this embodiment of the present application.
  • the target application is a live video player
  • the content that the live video player needs to present to the user includes a character shape, an environment background, and the like.
  • the image processing device renders different image frames to express these elements. Since live video is displayed frame by frame, the image processing apparatus needs to acquire each image frame to be processed in real time.
  • the image processing apparatus sequentially generates a calculation command queue and a rendering command queue of each image frame to be processed in the image sequence to be processed.
  • after acquiring an image frame to be processed, the image processing apparatus puts all the calculation commands of that image frame into a calculation command queue of a first preset type, and puts all the rendering commands of that image frame into a rendering command queue of a second preset type, thereby generating the calculation command queue and the rendering command queue of the image frame to be processed.
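  • the queue-generation step above can be sketched as follows. This is an illustrative sketch only: the patent does not specify a concrete command representation, so the `Command` structure and the "compute"/"render" kinds are assumptions made for the example.

```python
# Hypothetical sketch: partition a frame's commands into a calculation
# command queue and a rendering command queue, as described above.
from collections import namedtuple

Command = namedtuple("Command", ["kind", "payload"])

def build_queues(frame_commands):
    """Split one frame's commands into (calc_queue, render_queue)."""
    calc_queue = [c for c in frame_commands if c.kind == "compute"]
    render_queue = [c for c in frame_commands if c.kind == "render"]
    return calc_queue, render_queue

frame = [
    Command("compute", "shade pixel block 0"),
    Command("render", "draw background"),
    Command("compute", "shade pixel block 1"),
    Command("render", "draw character"),
]
calc_q, render_q = build_queues(frame)
```

  • in the real apparatus this partitioning is done by the CPU running the target application, which then sends both queues to the GPU.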
  • the image processing apparatus can acquire each image frame to be processed in real time, or directly acquire all the image frames to be processed included in the sequence of image frames to be processed. Regardless of which of the above manners the image processing apparatus acquires the image frame to be processed, the image processing apparatus generates a calculation command queue and a rendering command queue for each image frame to be processed in the order of the time at which the image frames to be processed are acquired.
  • the CPU running the target application in the image processing apparatus generates a calculation command queue and a rendering command queue for each image frame to be processed.
  • the CPU sends the calculation command queue and the rendering command queue of the image frame to be processed to the GPU in the image processing apparatus.
  • the GPU can calculate all the pixels of the image frame to be processed according to the obtained calculation command queue, and the GPU renders the image frame to be processed according to the obtained rendering command queue, and displays it on the display of the image processing apparatus.
  • the image frame obtained after rendering corresponds to the image frame to be processed.
  • the embodiment of the present application is described by taking a GPU to acquire a first calculation command queue and a first rendering command queue as an example.
  • the first rendering command queue includes a plurality of rendering commands corresponding to the Nth image frame, where N ≥ 1.
  • the first calculation command queue includes a plurality of calculation commands corresponding to the (N+1)th image frame.
  • the Nth image frame and the (N+1)th image frame are both the image frames to be processed described above.
  • the GPU of the image processing apparatus executes the rendering command included in the first rendering command queue by using the first thread to render the Nth image frame.
  • an idle SM (streaming multiprocessor) unit in the GPU of the image processing apparatus (referred to as the first SM unit) uses a first thread to execute the rendering commands included in the first rendering command queue, completing the rendering task of the Nth image frame, that is, implementing the rendering of the Nth image frame.
  • the method by which the first SM unit renders the Nth image frame is as follows: the first SM unit first calculates a feature value of the first rendering command queue according to a preset algorithm; then, the first SM unit determines whether the local storage unit stores a feature value of another rendering command queue that is equal to the feature value of the first rendering command queue.
  • if such a feature value is stored, the queue it belongs to is the target rendering command queue. The target rendering command queue includes a plurality of rendering commands corresponding to the Xth image frame (N > X ≥ 1), which indicates that the Nth image frame is identical to the Xth image frame, so the first SM unit can directly render the Nth image frame by using the rendering result of the Xth image frame; the rendering result may include texture information, shadow information, and the like.
  • specifically, the first SM unit locally obtains the rendering result of the Xth image frame (N > X ≥ 1) corresponding to the target rendering command queue, and renders the Nth image frame according to that rendering result.
  • in this case, rendering the Nth image frame does not occupy additional hardware resources, which reduces the hardware resources used in image rendering and reduces the power consumption of the image processing device.
  • if no equal feature value is stored, the Nth image frame has changed compared with the previous image frames, and the first SM unit executes all rendering commands in the first rendering command queue to render the Nth image frame.
  • in this case, rendering the Nth image frame requires additional hardware resources, such as memory and computing resources.
  • the feature value of the rendering command queue in the embodiment of the present application may be a Secure Hash Algorithm (SHA) value of the rendering command queue.
  • the image processing apparatus in the embodiment of the present application calculates the feature value of the first rendering command queue, that is, the image processing device calculates the SHA value of the first rendering command queue.
  • the method by which the image processing apparatus calculates the SHA value of the first rendering command queue is: the first SM unit determines the SHA value of the first rendering command queue by computing over the stored SHA values of the plurality of rendering commands in the first rendering command queue.
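  • the derivation of a queue-level SHA value from per-command SHA values can be sketched as below. SHA-256 is used here for illustration; the patent does not fix a specific SHA variant, so that choice, and hashing the concatenation of per-command digests, are assumptions of this sketch.

```python
# Illustrative sketch: a queue's SHA value derived from the stored SHA
# values of its individual rendering commands, preserving command order.
import hashlib

def command_sha(command_bytes):
    """SHA value of a single rendering command (assumed encoding: raw bytes)."""
    return hashlib.sha256(command_bytes).hexdigest()

def queue_sha(command_shas):
    """Combine the per-command digests, in order, into one queue digest."""
    h = hashlib.sha256()
    for digest in command_shas:
        h.update(digest.encode("ascii"))
    return h.hexdigest()

q1 = [command_sha(b"draw background"), command_sha(b"draw character")]
q2 = [command_sha(b"draw background"), command_sha(b"draw character")]
```

  • two queues containing the same commands in the same order produce the same feature value, which is exactly the property the lookup in the local storage unit relies on.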
  • in other words, the first SM unit may directly obtain, from local storage, a target command queue identical to the first rendering command queue, and use the rendering result of the Xth image frame corresponding to that target command queue as the rendering result of the Nth image frame, without comparing the two queues command by command.
  • the image processing device also stores the rendering result of the Nth image frame, which facilitates the rendering of subsequent repeated image frames.
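  • the reuse logic described above amounts to memoizing rendering results by feature value. The sketch below illustrates this; `execute_queue` stands in for the real GPU work and is a hypothetical placeholder, not an API from the patent.

```python
# Minimal sketch of result reuse: if a frame's rendering-command-queue
# feature value was seen before, reuse the stored rendering result
# instead of executing the queue again.
render_cache = {}  # feature value -> stored rendering result

def render_frame(feature_value, render_queue, execute_queue):
    if feature_value in render_cache:        # repeated frame: reuse result
        return render_cache[feature_value]
    result = execute_queue(render_queue)     # changed frame: full render
    render_cache[feature_value] = result     # store for later repeats
    return result

calls = []
def fake_execute(queue):
    calls.append(queue)
    return "rendered:" + queue

r1 = render_frame("sha-abc", "queue-A", fake_execute)
r2 = render_frame("sha-abc", "queue-A", fake_execute)  # cache hit
```

  • the second call returns the stored result without invoking the renderer again, mirroring how the first SM unit avoids re-executing rendering commands for a repeated frame.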
  • the GPU in the image processing apparatus uses a second thread to execute the calculation commands included in the first calculation command queue, calculating the (N+1)th image frame.
  • specifically, another idle SM unit in the GPU of the image processing apparatus (referred to as the second SM unit) uses the second thread to execute the calculation commands included in the first calculation command queue and calculates the (N+1)th image frame.
  • the image processing apparatus acquires a parallel processing message, where the parallel processing message is used to indicate that all pixels of the (N+1)th image frame are to be calculated during the process of rendering the Nth image frame.
  • specifically, the CPU also generates a parallel processing message. After generating the parallel processing message, the CPU sends it to the GPU.
  • the GPU in the image processing apparatus responds to the parallel processing message, and runs the first thread and the second thread in parallel.
  • specifically, the first SM unit and the second SM unit respond to the parallel processing message, so that all pixels of the (N+1)th image frame are calculated during the process of rendering the Nth image frame.
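  • the overlap of the two tasks can be sketched with two threads. Real SM units run on the GPU; ordinary Python threads are used here only to illustrate that rendering frame N and calculating frame N+1 proceed concurrently, and the function bodies are stand-ins.

```python
# Sketch: one thread renders frame N while a second thread calculates
# the pixels of frame N+1, mirroring the first and second SM units.
import threading

results = {}

def render_nth_frame():
    results["render"] = "frame N rendered"

def calculate_next_frame():
    results["compute"] = "frame N+1 pixels calculated"

t1 = threading.Thread(target=render_nth_frame)      # first thread / first SM unit
t2 = threading.Thread(target=calculate_next_frame)  # second thread / second SM unit
t1.start(); t2.start()
t1.join(); t2.join()
```

  • after both threads join, the rendering result of frame N and the pixel calculation of frame N+1 are both available, so frame N+1 can be rendered immediately.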
  • FIG. 4 shows a flow of processing an image frame by the image processing apparatus in the embodiment of the present application.
  • the GPU in the prior art performs the computing task and the rendering task serially: the next image frame can be processed only after the current image frame has been rendered.
  • by contrast, the GPU in the embodiment of the present application can calculate all pixels of the (N+1)th image frame while rendering the Nth image frame.
  • the image processing apparatus provided by the embodiment of the present application processes the image frame more efficiently.
  • the GPU in the embodiment of the present application can directly render the repeated image frames according to the stored rendering results, reduce hardware resources occupied during image rendering, and reduce power consumption of the image processing apparatus.
  • An embodiment of the present application provides an image processing apparatus for performing the steps performed by the image processing apparatus in the above image processing method.
  • the image processing apparatus provided by the embodiment of the present application may include a module corresponding to the corresponding step.
  • the embodiment of the present application may divide the function module into the image processing device according to the above method example.
  • each function module may be divided according to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the division of modules in the embodiments of the present application is schematic, and is only a logical function division, and may be further divided in actual implementation.
  • FIG. 5 shows a possible structural diagram of the image processing apparatus involved in the above embodiments.
  • the image processing apparatus includes an acquisition unit 501, a calculation unit 502, a rendering unit 503, a storage unit 504, and a determination unit 505.
  • the obtaining unit 501 is configured to support the image processing apparatus to execute S300, S301, and S304 in FIG. 3; the calculating unit 502 is configured to support the image processing apparatus to execute S303 in FIG. 3; the rendering unit 503 is configured to support the image processing apparatus to execute S302 in FIG. 3; the storage unit 504 is configured to support the image processing device to store the rendering result of the rendered image frame, and is further configured to support the image processing device to store the SHA value of the rendering command queue of the rendered image frame; the determining unit 505 is configured to support the image processing device to determine whether a feature value of another rendering command queue equal to the feature value of the first rendering command queue is stored locally. For all related content of the steps involved in the foregoing method embodiments, refer to the functional descriptions of the corresponding functional modules; details are not described herein again.
  • FIG. 6 shows a possible structural diagram of the image processing apparatus involved in the above embodiment.
  • the image processing apparatus includes a processing module 60 and a communication module 61.
  • the processing module 60 is configured to control and manage the actions of the image processing apparatus.
  • for example, the processing module 60 is configured to support the image processing apparatus to perform S301 to S305 in FIG. 3, and/or other processes of the techniques described in the present application.
  • the communication module 61 is for supporting communication of the image processing apparatus with an external device.
  • for example, the communication module 61 is configured to support the image processing apparatus to execute S300 in FIG. 3.
  • the processing module 60 in the embodiment of the present application may be a processor or a controller, for example, a CPU, a GPU, or a DSP, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • the communication module 61 in the embodiment of the present application may be a Bluetooth module for interacting with external devices, and may further include an RF circuit corresponding to the Bluetooth module.
  • the RF circuit is used to receive and transmit signals during the transmission or reception of information or during a call.
  • the communication module 61 in the embodiment of the present application may also be a communication interface for interacting with external devices.
  • the communication module may include two communication interfaces: a transmission interface for transmitting data to an external device, and a receiving interface for receiving data from the external device. That is, the image processing device may implement the receiving and transmitting of data through two different communication interfaces respectively.
  • alternatively, the communication module 61 can integrate the data receiving function and the data transmitting function into a single communication interface that has both a data receiving function and a data transmitting function.
  • the communication interface can be integrated on a Bluetooth chip or an NFC chip.
  • the communication module 61 in the embodiment of the present application may also be a transceiver or a transceiver circuit or the like.
  • the image processing apparatus may further include an input module 62 for implementing interaction between the user and the image processing apparatus.
  • the input module 62 can receive numeric or character information input by the user to generate a signal input related to user setting or function control.
  • the input module 62 may be a touch panel, or may be other human-computer interaction interfaces, such as physical input keys, microphones, etc., and may also be other external information capture devices, such as cameras.
  • the physical input keys employed by the input module 62 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.).
  • the input module 62 in the form of a microphone can collect voice input by the user or from the environment and convert it, in the form of an electrical signal, into commands executable by the processing module 60.
  • the image processing apparatus may further include an output module 63 for implementing interaction between the user and the image processing apparatus.
  • the output module 63 includes, but is not limited to, an image output module and a sound output module.
  • the image output module is used to output text, pictures and/or video.
  • the image output module may include a display panel, such as a liquid crystal display (LCD), organic light-emitting diode (OLED), or field emission display (FED) panel.
  • the image output module may include a single display panel or a plurality of display panels of different sizes.
  • the touch panel used by the input module 62 can also serve as the display panel of the output module 63.
  • when the touch panel detects a touch or proximity gesture operation on it, the operation is transmitted to the processing module 60 to determine the type of the touch event, and then the processing module 60 provides a corresponding visual output on the display panel according to the type of the touch event.
  • the input module 62 and the output module 63 can function as two separate components to implement the input and output functions of the image processing apparatus; however, in some embodiments, the input module 62 can be integrated with the output module 63 to implement the input and output functions of the image processing apparatus (the input module 62 and the output module 63 are enclosed in a broken-line frame in FIG. 6 to indicate that they are integrated into one body).
  • the image processing apparatus in the embodiment of the present application further includes a storage module 64, configured to store the rendering result of the rendered image frame, and further configured to support the image processing device in storing the SHA value of the rendering command queue of the rendered image frame.
  • the processing module 60 in the embodiment of the present application may be the processor 101 in FIG. 2, the communication module 61 may be the antenna in FIG. 2, the input module 62 may be the components 112 and 104-1 in FIG. 2, the output module 63 may be the display 104-2 in FIG. 2, and the storage module 64 may be the memory 103 in FIG. 2.
  • the image processing apparatus executes the image processing method of the embodiment shown in FIG. 3.
  • another embodiment of the present application further provides a computer readable storage medium including one or more program codes, the one or more programs including instructions; when a processor in the image processing apparatus executes the program code, the image processing apparatus executes the image processing method shown in FIG. 3.
  • in another embodiment of the present application, a computer program product is provided, comprising computer-executable instructions stored in a computer readable storage medium. At least one processor of the image processing apparatus reads the computer-executable instructions from the computer readable storage medium and executes them, so that the image processing apparatus performs the steps of the image processing method shown in FIG. 3.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the computer readable storage medium includes a number of instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • the foregoing storage medium includes: flash memory, mobile

Abstract

The invention relates to an image processing method and apparatus, which relate to the field of image processing and can solve the problem that the speed at which a terminal device processes an image frame is slow. The method comprises: acquiring a first rendering command queue, which contains multiple rendering commands corresponding to an Nth image frame, where N is greater than or equal to 1; acquiring a first calculation command queue, which contains multiple calculation commands corresponding to an (N+1)th image frame; and rendering the Nth image frame according to the first rendering command queue, and, during the process of rendering the Nth image frame, calculating all pixels of the (N+1)th image frame according to the first calculation command queue.
PCT/CN2017/106153 2017-10-13 2017-10-13 Procédé et appareil de traitement d'image WO2019071600A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780066411.2A CN109891388A (zh) 2017-10-13 2017-10-13 一种图像处理方法及装置
PCT/CN2017/106153 WO2019071600A1 (fr) 2017-10-13 2017-10-13 Procédé et appareil de traitement d'image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/106153 WO2019071600A1 (fr) 2017-10-13 2017-10-13 Procédé et appareil de traitement d'image

Publications (1)

Publication Number Publication Date
WO2019071600A1 true WO2019071600A1 (fr) 2019-04-18

Family

ID=66101228

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/106153 WO2019071600A1 (fr) 2017-10-13 2017-10-13 Procédé et appareil de traitement d'image

Country Status (2)

Country Link
CN (1) CN109891388A (fr)
WO (1) WO2019071600A1 (fr)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555900B (zh) * 2019-09-05 2023-11-17 网易(杭州)网络有限公司 渲染指令的处理方法及装置、存储介质、电子设备
CN111340681B (zh) * 2020-02-10 2024-02-20 青岛海信宽带多媒体技术有限公司 一种图像处理方法及装置
CN111651131B (zh) * 2020-05-18 2024-02-27 武汉联影医疗科技有限公司 图像的显示方法、装置和计算机设备
CN112652025B (zh) * 2020-12-18 2022-03-22 完美世界(北京)软件科技发展有限公司 图像渲染方法、装置、计算机设备及可读存储介质
CN114443189B (zh) * 2021-08-20 2023-01-13 荣耀终端有限公司 一种图像处理方法和电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080211822A1 (en) * 2004-06-23 2008-09-04 Nhn Corporation Method and System For Loading of Image Resource
CN103593168A (zh) * 2012-08-13 2014-02-19 株式会社突飞软件 利用多重处理的渲染处理装置及方法
CN105631921A (zh) * 2015-12-18 2016-06-01 网易(杭州)网络有限公司 图像数据的处理方法及装置
CN105701852A (zh) * 2014-12-09 2016-06-22 三星电子株式会社 用于渲染的设备和方法
CN107203960A (zh) * 2016-06-30 2017-09-26 北京新媒传信科技有限公司 图像渲染方法及装置


Also Published As

Publication number Publication date
CN109891388A (zh) 2019-06-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17928373

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17928373

Country of ref document: EP

Kind code of ref document: A1