WO2021237736A1 - Image processing method, apparatus and system, and computer-readable storage medium - Google Patents
- Publication number
- WO2021237736A1 (PCT/CN2020/093519)
- Authority
- WO
- WIPO (PCT)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
Definitions
- the present disclosure relates to the field of image processing technology, in particular, to image processing methods, devices and systems, and computer-readable storage media.
- Image processing often includes multiple stages of processing.
- the traditional image processing method generally uses a single processing unit to execute the multiple stages of processing in series: the first stage of processing is performed first; after it is completed, the second stage of processing is executed; after the second stage is completed, the third stage is executed, and so on.
- the processing efficiency of this image processing method is low.
- the embodiments of the present disclosure propose image processing methods, devices and systems, and computer-readable storage media to solve the technical problem of low image processing efficiency in related technologies.
- an image processing method for performing rendering processing on an original image, the rendering processing including a first processing and a second processing that can be processed in parallel, the method including: the first processing unit performs the first processing on the original image to generate first intermediate data; while the first processing unit performs the first processing, the second processing unit performs the second processing on the original image to generate second intermediate data; after the second processing is completed, the second processing unit sends the second intermediate data to the first processing unit; after the first processing is completed, the first processing unit generates a rendered image according to the first intermediate data and the second intermediate data.
- an image processing method applied to a first processing unit, the method being used to perform rendering processing on an original image, the rendering processing including a first processing and a second processing that can be processed in parallel.
- the method includes: performing the first processing on the original image to generate first intermediate data; receiving second intermediate data sent by the second processing unit after the second processing is completed, wherein the second processing unit performs the second processing on the original image to generate the second intermediate data while the first processing is being performed; and, after the first processing is completed, generating a rendered image according to the first intermediate data and the second intermediate data.
- an image processing method applied to a second processing unit is used to perform rendering processing on an original image.
- the rendering processing includes a first processing and a second processing that can be processed in parallel.
- the method includes: while the first processing unit performs the first processing on the original image to generate first intermediate data, performing the second processing on the original image to generate second intermediate data; and sending the second intermediate data to the first processing unit, so that the first processing unit generates a rendered image according to the first intermediate data and the second intermediate data after the first processing is completed.
- an image processing device applied to a first processing unit and configured to perform rendering processing on an original image, the rendering processing including a first processing and a second processing that can be processed in parallel, the device including a memory, a processor, and a computer program stored on the memory and executable on the processor.
- the processor implements the following steps when executing the program: performing the first processing on the original image to generate first intermediate data; receiving second intermediate data sent by the second processing unit after the second processing is completed, wherein the second processing unit performs the second processing on the original image to generate the second intermediate data while the first processing is being performed; and, after the first processing is completed, generating a rendered image according to the first intermediate data and the second intermediate data.
- an image processing device applied to a second processing unit and used to perform rendering processing on an original image, the rendering processing including a first processing and a second processing that can be processed in parallel, the device including a memory, a processor, and a computer program stored on the memory and executable on the processor.
- when the processor executes the program, the following steps are implemented: while the first processing unit performs the first processing on the original image to generate first intermediate data, performing the second processing on the original image to generate second intermediate data; and sending the second intermediate data to the first processing unit, so that the first processing unit generates a rendered image according to the first intermediate data and the second intermediate data after the first processing is completed.
- an image processing system for performing rendering processing on an original image.
- the rendering processing includes a first processing and a second processing that can be processed in parallel, and the system includes a first processing unit and a second processing unit; the second processing unit is used to perform the second processing on the original image to generate second intermediate data while the first processing unit performs the first processing on the original image, and to send the second intermediate data to the first processing unit after the second processing is completed; the first processing unit is used to perform the first processing on the original image to generate first intermediate data, and to generate a rendered image according to the first intermediate data and the second intermediate data after the first processing is completed.
- a computer-readable storage medium having a computer program stored thereon, and when the program is executed by a processor, the method described in any of the embodiments is implemented.
- while the first processing unit performs the first processing, the second processing unit performs the second processing on the original image in parallel to generate the second intermediate data, so that after the first processing unit completes the first processing, it can immediately generate a rendered image based on the first intermediate data and the second intermediate data. This saves the time the first processing unit would otherwise spend on the second processing and improves image processing efficiency.
- in addition, the second processing unit is fully utilized during image processing, which prevents it from sitting idle and improves its utilization rate.
- Fig. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure.
- Fig. 2 is a sequence diagram of an image processing method according to an embodiment of the present disclosure.
- Fig. 3A is a schematic diagram of the time required for a conventional image processing method.
- FIG. 3B is a schematic diagram of the time required for the image processing method of the embodiment of the present disclosure.
- Fig. 4 is a schematic diagram of an image processing method according to a specific embodiment of the present disclosure.
- FIG. 5A is a schematic diagram of the execution process of traditional decoding, rendering, and display.
- FIG. 5B is a schematic diagram of the execution process of decoding, rendering, and display according to an embodiment of the present disclosure.
- Fig. 6 is an overall flowchart of an image processing method according to an embodiment of the present disclosure.
- Fig. 7 is a flowchart of an image processing method according to another embodiment of the present disclosure.
- Fig. 8 is a flowchart of an image processing method according to still another embodiment of the present disclosure.
- Fig. 9 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
- Fig. 10 is a schematic diagram of an image processing system according to an embodiment of the present disclosure.
- the terms first, second, third, etc. may be used in this disclosure to describe various information, but the information should not be limited to these terms. These terms are only used to distinguish information of the same type from each other.
- first information may also be referred to as second information, and similarly, the second information may also be referred to as first information.
- the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
- Image processing often includes multiple stages of processing.
- the traditional image processing method generally uses a single processing unit to execute the multiple stages of processing in series: the first stage of processing is performed first; after it is completed, the second stage of processing is executed; after the second stage is completed, the third stage is executed, and so on.
- taking a GPU (Graphics Processing Unit) as an example, the GPU can perform the following operations in sequence: first perform skin smoothing and whitening processing on the face image, then determine the positions of the facial features in the face image, and finally perform face shape and eye shape processing according to those positions to generate the final beautified image.
- the processing efficiency of the above-mentioned image processing methods is low.
- an embodiment of the present disclosure provides an image processing method for rendering an original image.
- the rendering processing includes a first processing and a second processing that can be processed in parallel. As shown in FIGS. 1 and 2,
- the method includes:
- Step 101: The first processing unit performs the first processing on the original image to generate first intermediate data; while the first processing unit performs the first processing, the second processing unit performs the second processing on the original image to generate second intermediate data;
- Step 102: After the second processing is completed, the second processing unit sends the second intermediate data to the first processing unit;
- Step 103: After the first processing is completed, the first processing unit generates a rendered image according to the first intermediate data and the second intermediate data.
- the first processing performed by the first processing unit and the second processing performed by the second processing unit may be performed in parallel to generate the first intermediate data and the second intermediate data correspondingly.
- a rendered image can be generated immediately based on the first intermediate data and the second intermediate data sent by the second processing unit, which improves image processing Efficiency, thereby improving the real-time nature of image processing.
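The parallel split of Steps 101-103 can be sketched with ordinary threads. This is an illustrative sketch, not the patented implementation: `first_process`, `second_process`, and `merge` are hypothetical stand-ins for the two processing stages and the final combination step.

```python
import threading


def render_parallel(original_image, first_process, second_process, merge):
    """Run first_process and second_process on the image in parallel,
    then merge their intermediate results into the rendered image."""
    result = {}

    def run_second():
        # The second processing unit works while the first is busy
        # (Steps 101-102); its output is the second intermediate data.
        result["second"] = second_process(original_image)

    t = threading.Thread(target=run_second)
    t.start()
    first = first_process(original_image)  # first processing (Step 101)
    t.join()                               # wait for second intermediate data
    # Step 103: combine both intermediate results into the rendered image.
    return merge(first, result["second"])
```

Because the two stages are independent until the merge, the first unit never waits on the second unit except at the final join.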
- for example, the traditional image processing method performs the first processing, the second processing, and the third processing serially through the first processing unit; assuming the processing times are t1, t2, and t3, respectively, the total processing time required by the first processing unit is t1+t2+t3.
- in contrast, the embodiment of the present disclosure executes the second processing through the second processing unit while the first processing unit executes the first processing.
- the time t4 required by the second processing unit to perform the second processing is therefore not included in the processing time of the first processing unit (assuming t4 does not exceed t1, so the second intermediate data is ready in time); the total processing time required by the first processing unit is t1+t3.
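With hypothetical stage times, the saving works out as follows. The millisecond values are illustrative, not from the disclosure; only the t1+t2+t3 versus t1+t3 relationship comes from the text above.

```python
t1, t2, t3 = 10, 8, 5      # hypothetical stage times in milliseconds
t4 = 8                     # second processing, now on the second unit

serial_time = t1 + t2 + t3  # traditional: all three stages in series
parallel_time = t1 + t3     # first unit's path when the second unit
                            # runs t4 alongside t1 (valid while t4 <= t1)

assert parallel_time < serial_time
```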
- the original image may be an image taken in advance and stored in a storage unit, or an image frame in a video stream taken in advance and stored in a storage unit, where the storage unit may be that of an electronic device such as a mobile phone, tablet computer, or notebook computer. Accordingly, the original image may be obtained by the first processing unit decoding it from the storage unit.
- the original image may also be an image captured by an image capture device in real time, or an image frame in a video stream captured by an image capture device in real time, where the image capture device may be a camera, video camera, or other device with image capture capability. Accordingly, the original image can be obtained from the video stream collected by the image capture device.
- before the second processing unit performs the second processing on the original image to generate the second intermediate data, the original image may be obtained by the first processing unit, and the obtained original image may be sent to the second processing unit so that the second processing unit can execute the second processing.
- the first processing unit may also perform down-sampling processing on the original image to obtain a down-sampled image and then send the down-sampled image to the second processing unit; the second processing unit may then perform the second processing on the down-sampled image to generate the second intermediate data.
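A minimal sketch of the down-sampling step, as nearest-neighbour decimation on a row-major list of pixel rows. The real implementation would run on the GPU; the `factor` parameter is an assumption for illustration.

```python
def downsample(image, factor=2):
    """Keep every `factor`-th pixel in both directions."""
    return [row[::factor] for row in image[::factor]]
```

The second processing (e.g. face detection) then runs on the smaller image, cutting its cost by roughly the square of the factor.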
- for example, the first processing may be a first rendering processing, the first intermediate data may be a first image, the second processing may be target detection, the second intermediate data may be position information of the target object, and the third processing may be a second rendering processing.
- the method includes:
- Step 401: The first processing unit performs the first rendering processing on the original image to generate a first image; the second processing unit performs target detection on the original image to obtain position information of the target object in the original image;
- Step 402: The second processing unit sends the position information of the target object to the first processing unit;
- Step 403: After the first rendering processing is completed, the first processing unit performs the second rendering processing on the first image according to the position information of the target object to obtain the rendered image.
- both the first rendering processing and the second rendering processing may include, but are not limited to, at least one of brightness processing, color processing, resolution processing, and mosaic processing.
- at least one of the first rendering process and the second rendering process may be a beautification process, that is, a beautification process is performed on the face area in the original image.
- the first rendering processing may be performing beauty processing on the original image
- the second rendering processing may be performing color processing on the background area (that is, the area other than the face area) in the original image, To change the color of the background area.
- the first rendering processing may be performing mosaic processing on the original image to mosaic a specific area on the original image
- the second rendering processing may be performing beauty processing on the original image.
- both the first rendering processing and the second rendering processing are beauty processing.
- the first rendering process may include at least one of a skin smoothing (dermabrasion) process and a skin tone process.
- the skin color processing is used to set the skin color of the face area as a specified color, and a more common skin color processing is whitening processing.
- when the second rendering processing is a beauty processing, it may include at least one of face shape processing, facial features processing, and concealing processing.
- the face shape processing is used to adjust the shape and/or size of the human face in the original image; a common face shape processing is face-slimming processing.
- the facial features processing is used to adjust the shape and/or size of the facial features; a common facial features processing is big-eye processing, that is, increasing the size of the eyes.
- the concealing process is used to remove blemishes in the face area in the original image, or to conceal the blemishes through a specific pattern.
- the target object may be a face region in the original image
- the second processing unit may perform face detection on the original image to obtain face position information.
- the first processing unit may perform face shape processing on the first image according to the face position information to obtain the rendered image.
- the target object may also be the facial features (that is, at least one of eyebrows, eyes, ears, nose, and mouth) in the face area.
- the second processing unit may perform facial features detection on the original image to obtain facial features position information.
- the facial features position information may include the position information of at least one feature point of the facial features.
- the first processing unit may perform facial features processing on the face region in the original image according to the facial features position information.
- in some embodiments, the second processing unit may obtain the face information in the original image and obtain the facial features information of the original image according to the face information, wherein the face information includes face position information and face size information.
- the face position information may be expressed using the position information of one or more vertices of the bounding box of the face area (for example, the top-left corner) or of a certain point in the face area (for example, the center point of the face area), and the face size information may be expressed by the pixel length and width of the bounding box.
- the face position information may be a piece of normalized position information.
- for example, suppose the pixel length and pixel width of the face area in the original image obtained by the second processing unit (which may be the down-sampled original image) are M and N, respectively, and a certain point in the face area is the x-th point in the length direction and the y-th point in the width direction; the face position information of this point can then be expressed as (x/M, y/N).
- the face position information in the original image before downsampling can be directly obtained from the face position information in the original image after downsampling.
- the face position information may also be absolute position information in the down-sampled original image.
- after the absolute position information is obtained, it is converted into the face position information in the original image before down-sampling, based on the pixel sizes of the down-sampled image and of the original image before down-sampling. In the same way, the facial features position information may also be normalized position information.
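The normalized representation and its conversion back to the pre-down-sampling image can be sketched as follows; the function names are illustrative, not from the disclosure.

```python
def normalize_point(x, y, m, n):
    """Express the x-th/y-th pixel of an M x N face area as (x/M, y/N)."""
    return (x / m, y / n)


def to_original_image(norm_point, orig_m, orig_n):
    """Normalized coordinates are resolution-independent, so the same
    pair maps directly into the original image before down-sampling."""
    fx, fy = norm_point
    return (round(fx * orig_m), round(fy * orig_n))
```

This is why normalized face position information from the down-sampled image can be used directly for the full-resolution original image.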
- in some embodiments, the second processing unit may obtain the face position information in the original image and send it to the first processing unit; the first processing unit then intercepts a face area image from the original image according to the face position information and sends the face area image to the second processing unit; the second processing unit then performs facial features detection on the face area image to obtain the facial features position information. Further, the second processing unit may also obtain face size information in the original image and send the face position information and face size information to the first processing unit, so that the first processing unit intercepts the face area image from the original image according to the face position information and face size information.
- before sending the face area image to the second processing unit, the face area image may also be processed so that it reaches a preset size (for example, 160x160); this process is called resizing the face area image.
- the preset size may be determined according to the detection algorithm used for the facial features detection. For example, facial features detection can be performed through a pre-trained neural network, and the neural network may have been trained on images of the preset size; therefore, before sending the face region image to the second processing unit, the face area image may be resized to the preset size so that the neural network can process it.
- the first processing unit may also scale the face area image to the preset size and send the scaled face area image to the second processing unit, so that the second processing unit performs facial features detection on the scaled face area image.
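The crop-and-resize step performed by the first processing unit can be sketched in pure Python with nearest-neighbour scaling. A real implementation would use a GPU resize; the 160x160 preset size is the example value from the text, and the function names are illustrative.

```python
def crop(image, x0, y0, w, h):
    """Intercept the face area from a row-major image (list of rows)."""
    return [row[x0:x0 + w] for row in image[y0:y0 + h]]


def resize_nearest(image, out_w, out_h):
    """Nearest-neighbour resize to the detector's preset size."""
    in_h, in_w = len(image), len(image[0])
    return [[image[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]
```

In the pipeline described above, `crop` uses the face position and size information, and `resize_nearest` brings the result to, e.g., 160x160 before it is handed to the facial features detector.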
- the second processing unit may perform face detection on the original image to obtain the face position information.
- the second processing unit may also obtain the face position information in the previous frame of the original image in the video stream, and use the face position information in that previous frame as the face position information in the original image.
- the position of the face area in adjacent image frames of a video stream is relatively fixed; therefore, after the original image is obtained, facial features detection can be started immediately. If face detection has not yet been completed at that time, the face position information of the previous frame can be used directly to perform facial features detection, thereby further improving image processing efficiency.
- in this way, the embodiment of the present disclosure separates the logic of facial features detection from that of face detection and reduces the dependence between the two. Since the face area changes little between two adjacent frames, if the face detection of the current frame cannot be completed in time, the face result of the previous frame can be used to improve the processing speed.
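The fallback rule above is simple to express: use the current frame's detection if it has finished, otherwise reuse the previous frame's result. The names are illustrative, and `None` stands in for a detection that has not completed yet.

```python
def face_position(current_detection, previous_position):
    """Prefer the fresh result; fall back to the previous frame's
    face position, which is usually close in adjacent frames."""
    if current_detection is not None:
        return current_detection
    return previous_position
```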
- the target object may be a defect in the face area (for example, at least one of stains, acne marks, and moles), and the first processing unit performs the first rendering After the processing is completed, concealment processing may be performed on the face region according to the position information of the facial flaw.
- the target object may also include at least two of the face area, the facial features, and the blemish. For example, if the target object includes the face area and the facial features, the second intermediate data includes not only face position information but also facial features position information.
- the first processing unit may further display the rendered image. Further, when the rendered image is displayed, other processing may be performed on the rendered image, for example, filter processing, which will not be repeated here.
- the generating of first intermediate data after performing the first processing on the original image and the generating of the rendered image according to the first intermediate data and the second intermediate data may be executed by a rendering thread. Displaying the rendered image may be performed by a display thread, and the acquisition of the original image may be performed by an image acquisition thread; wherein, the rendering thread, the display thread, and the image acquisition thread perform corresponding operations in parallel.
- the generated rendered image is displayed by the display thread, and the original image to be rendered is acquired by the image acquisition thread, wherein the number of to-be-rendered original images held by the image acquisition thread is a preset number.
- traditionally, the three processes of image acquisition, image rendering, and image display are executed serially: the image acquisition thread first acquires original image 1; after the acquisition is completed, the rendering thread renders original image 1 to obtain rendered image 1; then the display thread displays rendered image 1. Only after rendered image 1 has been displayed does the image acquisition thread start to acquire original image 2.
- the processing efficiency of this processing method is relatively low.
- the image acquisition, rendering, and display processes are executed in parallel.
- since the rendering time is the longest of the three, as long as rendering can be kept real-time (for a 30 frames-per-second scene, the processing time per frame needs to be less than 33 milliseconds), the entire link can run in real time.
- Figure 5B shows this parallel timing, where the decoding queue size is 2.
- the rendering thread can start to render the first frame.
- the decoding queue is empty and can continue to decode the second and third frames.
- the decoding queue is full, wait for the rendering thread to fetch the next frame;
- the display thread can display the first frame, and the rendering thread takes the second frame from the decoding queue for rendering.
- the decoding queue then vacates a position, so the fourth frame can be decoded and put into the queue, and so on. In this way, the processing efficiency of the image acquisition, rendering, and display process is improved.
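The decode/render/display pipeline of Fig. 5B can be sketched with bounded queues: the decoder blocks when its queue (size 2 in the figure) is full, exactly as in the step-by-step description above. `decode`, `render`, and `display` are hypothetical stage functions.

```python
import queue
import threading


def run_pipeline(frames, decode, render, display, queue_size=2):
    """Decode -> render -> display pipeline with bounded queues."""
    decoded = queue.Queue(maxsize=queue_size)   # decoding queue (size 2)
    rendered = queue.Queue(maxsize=queue_size)  # rendering queue
    shown = []

    def decoder():
        for f in frames:
            decoded.put(decode(f))  # blocks while the decoding queue is full
        decoded.put(None)           # end-of-stream marker

    def renderer():
        while (f := decoded.get()) is not None:
            rendered.put(render(f))
        rendered.put(None)

    threads = [threading.Thread(target=decoder),
               threading.Thread(target=renderer)]
    for t in threads:
        t.start()
    # The display "thread" here is the main thread, for simplicity.
    while (f := rendered.get()) is not None:
        shown.append(display(f))
    for t in threads:
        t.join()
    return shown
```

All three stages run concurrently; each frame moves through the queues as soon as the next stage is free, which is the behaviour the sequence in the text describes.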
- the computing power required for the first processing is greater than the computing power required for the second processing.
- the first processing is a processing with a higher degree of parallelism, and the second processing is a processing with a lower degree of parallelism.
- the first processing unit may be a GPU
- the second processing unit may be a CPU (Central Processing Unit).
- the original image is an image obtained by decoding from a storage unit.
- the image acquisition thread is a decoding thread
- the first processing unit is a GPU
- the second processing unit is a CPU.
- the original image is decoded and enters the decoding queue for rendering.
- the data is on the GPU.
- the rendering thread fetches the original image and starts to perform beauty rendering, and the rendered data is put into the rendering queue.
- the face area image in the original image first undergoes skin smoothing and whitening processing on the GPU, and the face detection thread and the facial features detection thread are started at the same time.
- the face detection thread first performs down-sampling on the GPU, then copies the down-sampled data from the GPU to the CPU, uses the down-sampled original image to perform face detection on the CPU, and outputs face position information and face size information for use by the facial features detection thread.
- the facial features detection thread intercepts the face area image from the original image according to the face position information and face size information, resizes it to the preset size, copies the resized face area image from the GPU to the CPU, and performs facial features detection on the standard-size face area image.
- the facial features detection data is then sent to the rendering thread. If the whitening and skin smoothing have been completed at this time, the rendering thread can proceed to the face shape processing and facial features processing. After all the processing is completed, the beautified data is put into the rendering queue; at this time the data is on the GPU.
- the display thread waits until there is data in the rendering queue and then takes it out and displays it on the screen.
- skin smoothing and whitening need to operate on the full original image, so performing these two kinds of processing on the GPU makes full use of GPU resources.
- the GPU computing power is already full, and the CPU is still waiting idle, so part of the work is transferred to the CPU for parallel calculation.
- the image area processed by face detection and facial features detection is smaller, which is more suitable for calculation on the CPU side.
- the rendering processing in the embodiment of the present disclosure may also include other processing (for example, head posture detection); according to at least one of the computing-resource requirements and the degree of parallelism of that other processing, it can be selectively allocated to the first processing unit or the second processing unit, which will not be repeated here.
- an embodiment of the present disclosure also provides an image processing method applied to a first processing unit; the method is used to perform rendering processing on an original image, the rendering processing including a first processing and a second processing that can be processed in parallel. The method includes:
- Step 701: Perform the first processing on the original image to generate first intermediate data;
- Step 702: Receive second intermediate data sent by the second processing unit after the second processing is completed, wherein the second processing unit performs the second processing on the original image to generate the second intermediate data while the first processing is being performed;
- Step 703: After the first processing is completed, generate a rendered image according to the first intermediate data and the second intermediate data.
- the method further includes: performing down-sampling processing on the original image to obtain a down-sampled image; sending the down-sampled image to a second processing unit for the second processing.
- the performing of the first processing on the original image to generate first intermediate data includes: performing the first rendering processing on the original image to obtain the first image; the second processing includes performing target detection on the original image, and the second intermediate data includes position information of the target object in the original image; the generating of a rendered image according to the first intermediate data and the second intermediate data includes: performing the second rendering processing on the first image according to the position information of the target object to obtain the rendered image.
- At least one of the first rendering process and the second rendering process is beauty processing.
- the target object is a human face, facial features, or facial blemishes
- the second processing unit performs the second processing based on the following to generate the second intermediate data: performing face detection on the original image to obtain face position information; or performing facial features detection on the original image to obtain facial features position information; or performing facial blemish detection on the original image to obtain position information of the facial blemish.
- the second processing unit performing the second processing on the original image to generate the second intermediate data includes: the second processing unit obtaining the face information in the original image and acquiring the facial features information of the original image according to the face information, wherein the face information includes face position information and face size information.
- the method further includes: acquiring the face position information sent by the second processing unit; cropping a face area image from the original image according to the face position information, and sending the face area image to the second processing unit for facial feature detection.
- the original image is an image frame in a video stream; the second processing unit obtains the face position information in the original image based on the following method: performing face detection on the original image to obtain the face position information; or obtaining the face position information in the previous frame of the original image in the video stream, and using the face position information in the previous frame as the face position information in the original image.
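Reusing the previous frame's face position can be sketched as a small cache wrapped around whatever detector is available. Here `fake_detect` and the `(x, y, w, h)` tuple format are assumptions purely for illustration; in a real system the detector would be the second processing unit's face detection routine.

```python
class FacePositionCache:
    """Reuse the previous frame's face position instead of re-detecting.

    `detect` is invoked only when no cached position exists, or when the
    caller forces a refresh (e.g. periodically, or on scene change).
    """
    def __init__(self, detect):
        self.detect = detect
        self.last_position = None

    def position_for(self, frame, refresh=False):
        if refresh or self.last_position is None:
            self.last_position = self.detect(frame)
        return self.last_position

calls = []
def fake_detect(frame):
    # Hypothetical detector returning an (x, y, w, h) box.
    calls.append(frame)
    return (10, 20, 64, 64)

cache = FacePositionCache(fake_detect)
p1 = cache.position_for("frame-1")   # runs detection
p2 = cache.position_for("frame-2")   # reuses the cached position
```

Between consecutive video frames the face typically moves little, so the cached position is usually good enough and the per-frame detection cost is avoided.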
- the sending the face area image to the second processing unit includes: scaling the face area image to a preset size; and sending the scaled face area image to the second processing unit for facial feature detection.
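Scaling the cropped face region to the detector's preset input size can be illustrated with a naive nearest-neighbour resize over a 2-D pixel list; a real implementation would use an optimized library resize, and the preset size of 4x4 here is an arbitrary illustration.

```python
def resize_nearest(image, out_h, out_w):
    """Nearest-neighbour resize of a 2-D pixel list to (out_h, out_w)."""
    in_h, in_w = len(image), len(image[0])
    return [[image[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

face_region = [[1, 2], [3, 4]]              # cropped face area image
scaled = resize_nearest(face_region, 4, 4)  # preset detector input size
```

Feeding the detector a fixed-size input keeps its runtime predictable regardless of how large the detected face is in the original frame.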
- the first rendering processing includes at least one of a skin refining processing and a skin tone processing; and/or the second rendering processing includes at least one of face shape processing, facial feature processing, and concealer processing.
- the method further includes: displaying the rendered image.
- the method further includes: obtaining the original image, wherein the original image is obtained by decoding from a storage unit or obtained from a video stream collected by an image collecting device.
- the performing the first processing on the original image to generate the first intermediate data and the generating of the rendered image according to the first intermediate data and the second intermediate data are performed by a rendering thread; the displaying of the rendered image is performed by a display thread; and the acquisition of the original image is performed by an image acquisition thread; wherein the rendering thread, the display thread, and the image acquisition thread perform their corresponding operations in parallel.
- while the generated rendered image is displayed by the display thread, the original image to be rendered is acquired by the image acquisition thread; wherein the number of to-be-rendered original images buffered by the image acquisition thread is a preset number.
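The three-thread arrangement above, with a bounded buffer of to-be-rendered frames, can be sketched with Python's `queue` and `threading` modules. The rendering and display bodies are stand-ins, and `CAPACITY` plays the role of the preset number of buffered frames.

```python
import queue
import threading

CAPACITY = 2  # preset number of to-be-rendered frames buffered

to_render = queue.Queue(maxsize=CAPACITY)   # acquisition -> rendering
to_display = queue.Queue()                  # rendering -> display
shown = []

def acquire(frames):                        # image acquisition thread
    for f in frames:
        to_render.put(f)                    # blocks once CAPACITY is reached
    to_render.put(None)                     # end-of-stream marker

def render():                               # rendering thread
    while (f := to_render.get()) is not None:
        to_display.put(f"rendered({f})")    # stand-in for the real rendering
    to_display.put(None)

def display():                              # display thread
    while (img := to_display.get()) is not None:
        shown.append(img)                   # stand-in for presenting the image

threads = [threading.Thread(target=acquire, args=([1, 2, 3],)),
           threading.Thread(target=render),
           threading.Thread(target=display)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The bounded queue is what enforces the "preset number": acquisition blocks when the rendering thread falls behind, so memory use stays fixed while the three stages still overlap.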
- the computing power required for the first processing is greater than the computing power required for the second processing.
- the first processing unit is a graphics processor
- the second processing unit is a central processing unit
- the method in the embodiment of the present disclosure may be executed by a first processing unit.
- for details of the first processing unit, please refer to the embodiment of the first processing unit in the aforementioned image processing method, which will not be repeated here.
- an embodiment of the present disclosure also provides an image processing method applied to a second processing unit; the method is used to perform rendering processing on an original image, and the rendering processing includes a first processing and a second processing that can be processed in parallel. The method includes:
- Step 801 While the first processing unit performs first processing on the original image to generate first intermediate data, perform second processing on the original image to generate second intermediate data;
- Step 802 Send the second intermediate data to the first processing unit, so that after the first processing is completed, the first processing unit generates a rendered image according to the first intermediate data and the second intermediate data.
- the acquiring the original image includes: receiving the down-sampled original image sent by the first processing unit.
- the first processing includes a first rendering processing
- the first intermediate data includes a first image
- the performing a second processing on the original image to generate second intermediate data includes:
- the original image is subjected to target detection to obtain the position information of the target object in the original image;
- the first processing unit generates the rendered image based on the following method: performing a second rendering processing on the first image according to the position information of the target object to obtain the rendered image.
- At least one of the first rendering process and the second rendering process is beauty processing.
- the target object is a human face, facial features, or facial blemishes
- the performing the second processing on the original image to generate the second intermediate data includes: performing face detection on the original image to obtain face position information; or performing facial feature detection on the original image to obtain facial feature position information; or performing facial blemish detection on the original image to obtain position information of the facial blemishes.
- the performing the second processing on the original image to generate the second intermediate data includes: obtaining the face information in the original image, and obtaining the facial feature information of the original image according to the face information; wherein the face information includes face position information and face size information.
- the performing facial feature detection on the original image to obtain the facial feature position information includes: acquiring the face position information in the original image and sending the face position information to the first processing unit; receiving the face area image returned by the first processing unit according to the face position information, the face area image being cropped from the original image; and performing facial feature detection on the face area image to obtain the facial feature position information.
- the original image is an image frame in a video stream; the obtaining the face position information in the original image includes: performing face detection on the original image to obtain the face position information; or obtaining the face position information in the previous frame of the original image in the video stream, and using the face position information in the previous frame as the face position information in the original image.
- the first rendering processing includes at least one of a skin refining processing and a skin tone processing; and/or the second rendering processing includes at least one of face shape processing, facial feature processing, and concealer processing.
- the method in the embodiment of the present disclosure may be executed by a second processing unit.
- for details of the second processing unit, please refer to the embodiment of the second processing unit in the foregoing image processing method, which will not be repeated here.
- the embodiment of the present disclosure further provides an image processing device, which is applied to a first processing unit, and is used to perform rendering processing on an original image.
- the rendering processing includes a first processing and a second processing that can be processed in parallel; the device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the following steps when executing the program:
- performing the first processing on the original image to generate first intermediate data; and after the first processing is completed, generating a rendered image according to the first intermediate data and the second intermediate data.
- the embodiment of the present disclosure also provides an image processing device, which is applied to a second processing unit, and is used to perform rendering processing on an original image.
- the rendering processing includes a first processing and a second processing that can be processed in parallel; the device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the following steps when executing the program:
- while the first processing unit performs the first processing on the original image to generate first intermediate data, performing the second processing on the original image to generate second intermediate data; and sending the second intermediate data to the first processing unit, so that after the first processing is completed, the first processing unit generates a rendered image according to the first intermediate data and the second intermediate data.
- the above-mentioned image processing device may be a processing chip, for example, a SoC chip, or a computer device.
- FIG. 9 shows a schematic diagram of the hardware structure of a more specific image processing device provided by an embodiment of this specification.
- the device may include: a processor 901, a memory 902, an input/output interface 903, a communication interface 904, and a bus 905.
- the processor 901, the memory 902, the input/output interface 903, and the communication interface 904 realize the communication connection between each other in the device through the bus 905.
- the processor 901 can be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is used to execute related programs to realize the technical solutions provided in the embodiments of this specification.
- the memory 902 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory), static storage device, dynamic storage device, etc.
- the memory 902 may store an operating system and other application programs.
- related program codes are stored in the memory 902 and called and executed by the processor 901.
- the input/output interface 903 is used to connect an input/output module to realize information input and output.
- the input/output module can be configured in the device as a component (not shown in the figure), or can be connected externally to the device to provide corresponding functions.
- the input device may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and an output device may include a display, a speaker, a vibrator, an indicator light, and the like.
- the communication interface 904 is used to connect a communication module (not shown in the figure) to realize the communication interaction between the device and other devices.
- the communication module can realize communication through wired means (such as USB, network cable, etc.), or through wireless means (such as mobile network, WIFI, Bluetooth, etc.).
- the bus 905 includes a path to transmit information between various components of the device (for example, the processor 901, the memory 902, the input/output interface 903, and the communication interface 904).
- the device may also include other components necessary for normal operation.
- the above-mentioned device may also include only the components necessary to implement the solutions of the embodiments of the present specification, and not necessarily include all the components shown in the figures.
- an embodiment of the present disclosure also provides an image processing system for rendering an original image.
- the rendering processing includes a first processing and a second processing that can be processed in parallel, and the system includes:
- the second processing unit 1002 is configured to perform the second processing on the original image to generate second intermediate data while the first processing unit 1001 performs the first processing on the original image, and, after the second processing is completed, to send the second intermediate data to the first processing unit 1001;
- the first processing unit 1001 is configured to perform the first processing on the original image to generate first intermediate data, and, after the first processing is completed, to generate a rendered image according to the first intermediate data and the second intermediate data.
- the embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the method described in any of the embodiments is implemented.
- the embodiments of the present disclosure may adopt the form of a computer program product implemented on one or more storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing program codes.
- Computer usable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be achieved by any method or technology.
- the information can be computer-readable instructions, data structures, program modules, or other data.
- Examples of computer storage media include, but are not limited to: phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices.
Abstract
An image processing method, apparatus and system, and a computer-readable storage medium are disclosed. The method comprises: while a first processing unit performs a first processing on an original image to generate first intermediate data, performing, by a second processing unit, a second processing on the original image to generate second intermediate data; and after the first processing is completed, generating, by the first processing unit, a rendered image according to the first intermediate data and the second intermediate data. Since the first processing and the second processing are executed in parallel, the time the first processing unit spends waiting for the second intermediate data is reduced, and image processing efficiency is thereby improved.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/093519 WO2021237736A1 (fr) | 2020-05-29 | 2020-05-29 | Procédé, appareil et système de traitement d'images, et support d'enregistrement lisible par ordinateur |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/093519 WO2021237736A1 (fr) | 2020-05-29 | 2020-05-29 | Procédé, appareil et système de traitement d'images, et support d'enregistrement lisible par ordinateur |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021237736A1 true WO2021237736A1 (fr) | 2021-12-02 |
Family
ID=78745407
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/093519 WO2021237736A1 (fr) | 2020-05-29 | 2020-05-29 | Procédé, appareil et système de traitement d'images, et support d'enregistrement lisible par ordinateur |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2021237736A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114501141A (zh) * | 2022-01-04 | 2022-05-13 | 杭州网易智企科技有限公司 | 视频数据处理方法、装置、设备和介质 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140292774A1 (en) * | 2013-03-26 | 2014-10-02 | Nvidia Corporation | System and method for performing sample-based rendering in a parallel processor |
CN105894480A (zh) * | 2016-05-03 | 2016-08-24 | 成都索贝数码科技股份有限公司 | 一种高效且易于并行实现的美颜装置 |
CN106534667A (zh) * | 2016-10-31 | 2017-03-22 | 努比亚技术有限公司 | 分布式协同渲染方法及终端 |
CN106600521A (zh) * | 2016-11-30 | 2017-04-26 | 宇龙计算机通信科技(深圳)有限公司 | 一种图像处理方法及终端设备 |
CN107392110A (zh) * | 2017-06-27 | 2017-11-24 | 五邑大学 | 基于互联网的人脸美化系统 |
CN108133453A (zh) * | 2017-12-13 | 2018-06-08 | 北京奇虎科技有限公司 | 一种基于OpenGL的图像处理器及其功能扩展方法 |
CN110852934A (zh) * | 2018-08-21 | 2020-02-28 | 北京市商汤科技开发有限公司 | 图像处理方法及装置、图像设备及存储介质 |
- 2020-05-29: PCT/CN2020/093519 filed, published as WO2021237736A1 (active, Application Filing)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140292774A1 (en) * | 2013-03-26 | 2014-10-02 | Nvidia Corporation | System and method for performing sample-based rendering in a parallel processor |
CN105894480A (zh) * | 2016-05-03 | 2016-08-24 | 成都索贝数码科技股份有限公司 | 一种高效且易于并行实现的美颜装置 |
CN106534667A (zh) * | 2016-10-31 | 2017-03-22 | 努比亚技术有限公司 | 分布式协同渲染方法及终端 |
CN106600521A (zh) * | 2016-11-30 | 2017-04-26 | 宇龙计算机通信科技(深圳)有限公司 | 一种图像处理方法及终端设备 |
CN107392110A (zh) * | 2017-06-27 | 2017-11-24 | 五邑大学 | 基于互联网的人脸美化系统 |
CN108133453A (zh) * | 2017-12-13 | 2018-06-08 | 北京奇虎科技有限公司 | 一种基于OpenGL的图像处理器及其功能扩展方法 |
CN110852934A (zh) * | 2018-08-21 | 2020-02-28 | 北京市商汤科技开发有限公司 | 图像处理方法及装置、图像设备及存储介质 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114501141A (zh) * | 2022-01-04 | 2022-05-13 | 杭州网易智企科技有限公司 | 视频数据处理方法、装置、设备和介质 |
CN114501141B (zh) * | 2022-01-04 | 2024-02-02 | 杭州网易智企科技有限公司 | 视频数据处理方法、装置、设备和介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20937749; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20937749; Country of ref document: EP; Kind code of ref document: A1 |