CN116739884B - Calculation method based on cooperation of CPU and GPU - Google Patents

Calculation method based on cooperation of CPU and GPU

Info

Publication number
CN116739884B
CN116739884B CN202311027658.7A CN202311027658A
Authority
CN
China
Prior art keywords
image
gpu
cpu
processed
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311027658.7A
Other languages
Chinese (zh)
Other versions
CN116739884A (en)
Inventor
李健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Blue Yun Polytron Technologies Inc
Original Assignee
Beijing Blue Yun Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Blue Yun Polytron Technologies Inc filed Critical Beijing Blue Yun Polytron Technologies Inc
Priority to CN202311027658.7A priority Critical patent/CN116739884B/en
Publication of CN116739884A publication Critical patent/CN116739884A/en
Application granted granted Critical
Publication of CN116739884B publication Critical patent/CN116739884B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a calculation method based on cooperation of a CPU and a GPU, and relates to the technical field of graphic data processing. The method includes: receiving a target video and creating a data volume histogram based on its image track; clustering time points according to the data volume histogram, preprocessing the image frames according to the clustering result, and selecting images to be processed from the clustering result based on the occupancy rates of the CPU and the GPU; sending the images to be processed to the GPU, performing image processing on them with the GPU, recording the processing flow, and feeding it back to the CPU; and, in the CPU, performing image processing on the remaining images of the same class based on the processing flow. In this application, the CPU preprocesses the image set and selects a number of non-repeated images in advance; the GPU processes these non-repeated images, compares each image before and after processing, records the processing flow, and feeds it back to the CPU, which then applies it to the other, repeated images. This reduces the working pressure of the GPU and optimizes the cooperation between the two processors.

Description

Calculation method based on cooperation of CPU and GPU
Technical Field
The application relates to the technical field of graphic data processing, in particular to a calculation method based on cooperation of a CPU and a GPU.
Background
A graphics processing unit (GPU), also known as a display core, visual processor, or display chip, is a microprocessor dedicated to image- and graphics-related operations on personal computers, workstations, game consoles, and some mobile devices (e.g., tablet computers and smartphones). The GPU reduces the graphics card's dependence on the CPU and takes over part of the work originally performed by the CPU. In 3D graphics processing in particular, the core technologies adopted by the GPU include hardware T&L (geometric transform and lighting), cubic environment texture mapping and vertex blending, texture compression and bump mapping, and a dual-texture four-pixel 256-bit rendering engine, among which hardware T&L can be regarded as the hallmark of the GPU.
Conventionally, the GPU is a module dedicated to image processing and can be applied to video processing. However, the coordination between the existing GPU and CPU is rather simple: when the CPU detects an image, it sends the image directly to the GPU for processing, the GPU feeds back the processing result, and the CPU compiles the statistics.
Disclosure of Invention
(I) technical problems solved
Aiming at the shortcomings of the prior art, the application provides a calculation method based on cooperation of a CPU and a GPU, which solves the technical problem that the existing coordination between the CPU and the GPU is overly simple and cannot reasonably allocate limited processing resources.
(II) technical scheme
In order to achieve the above purpose, the application is realized by the following technical scheme:
a computing method based on cooperation of a CPU and a GPU, the method comprising:
receiving a target video, extracting an image track in the target video, and creating a data volume histogram based on the image track;
clustering time points according to the data volume histogram, preprocessing an image frame according to a clustering result, and selecting an image to be processed from the clustering result based on the occupancy rate of the CPU and the GPU; the preprocessing process is completed by a CPU;
sending the image to be processed to the GPU, performing image processing on the image to be processed based on the GPU, recording a processing flow, and feeding back to the CPU;
in the CPU, image processing is performed on the same kind of image based on the processing flow.
As a further limitation of the technical solution of the embodiment of the present application, the steps of receiving a target video, extracting an image track in the target video, and creating a data volume histogram based on the image track include:
receiving a target video, and extracting an image track in the target video to obtain an image set; the composition unit of the image set is an image frame taking the frame number as a label;
calculating the size of the image according to a preset calculation formula;
creating a data volume histogram by taking the frame number as an abscissa and the image size as an ordinate;
the calculation formula of the image size is as follows: s=f×w/8;
wherein S is the image size, F is the number of pixels after eliminating repeated color values, and W is the bit depth; the bit depth of a gray image is 8 bits, and that of an RGB image is 24 bits.
As a further limitation of the technical solution of the embodiment of the present application, the step of clustering the time points according to the data volume histogram, preprocessing the image frames according to the clustering result, and selecting the image to be processed in the clustering result based on the occupancy rate of the CPU and the GPU includes:
calculating the data quantity difference of each frame of image according to the data quantity histogram;
counting data quantity difference according to a preset time unit, and merging time units according to the average value of the data quantity difference to obtain a clustering result;
randomly determining a preset number of sampling positions, and reading point position values of each image frame at the sampling positions; the point position value is a color parameter of the pixel point;
the occupancy rates of the CPU and the GPU are read, and the rated processing quantity and the point position value conditions are determined according to the occupancy rates;
and selecting an image to be processed in the image frame based on the point value condition.
As a further limitation of the technical solution of the embodiment of the present application, the step of randomly determining a preset number of sampling positions, and reading the point position value of each image frame at the sampling position includes:
randomly determining a preset number of sampling positions; the sampling position is a coordinate;
reading a clustering result, and positioning pixel points of each image frame at sampling positions in the similar images;
and reading the color value of the pixel point, and inputting the color value into a preset merging formula to obtain a point position value.
As a further limitation of the technical solution of the embodiment of the present application, the step of reading the occupancy rates of the CPU and the GPU and determining the condition of the rated processing number and the point location value according to the occupancy rates includes:
the occupancy rates of the CPU and the GPU are read, and rated processing quantity is read from a preset quantity table according to the occupancy rates;
determining the point position value condition according to the rated processing quantity; the point position value condition is the ratio of the point position value to the mode value; the point position value condition is inversely proportional to the rated processing quantity.
As a further limitation of the technical solution of the embodiment of the present application, the step of selecting the image to be processed in the image frame based on the point location value condition includes:
reading all point position values at each sampling position according to a preset sequence;
acquiring the mode value of the point position values, and calculating the deviation rate of all the point position values according to the mode value;
comparing the deviation rate with the point value condition, and when the deviation rate reaches the point value condition, automatically increasing the selection times of the corresponding image frames;
and when the selection times reach a preset time threshold, selecting the image frame as an image to be processed.
As a further limitation of the technical solution of the embodiment of the present application, the steps of counting the data quantity difference according to a preset time unit and merging time units according to the mean value of the data quantity difference to obtain a clustering result include:
counting data quantity difference according to a preset time unit;
calculating the average value of the data quantity difference value;
traversing all data quantity differences according to the average value, and marking jump image frames according to the traversing result;
and updating the clustering result by taking the jump image frame as an endpoint.
As a further limitation of the technical solution of the embodiment of the present application, the steps of sending the image to be processed to the GPU, performing image processing on the image to be processed based on the GPU, recording a processing flow, and feeding back the processing flow to the CPU include:
sending the image to be processed to the GPU;
performing image processing on the image to be processed based on the GPU;
comparing the image to be processed after the image processing with the original image to be processed, and outputting an additional image layer;
and packaging the image to be processed after image processing together with the additional image layer and sending them to the CPU.
As a further limitation of the technical solution of the embodiment of the present application, the step of performing, in the CPU, image processing on the similar image based on the processing flow includes:
receiving an image to be processed after image processing and an additional image layer, and reading similar images in a clustering result by taking the image to be processed after image processing as a center; the similar images are images between two adjacent images to be processed;
and carrying out superposition processing on the similar images according to the additional layers.
(III) beneficial effects
The application provides a calculation method based on cooperation of a CPU and a GPU. Compared with the prior art, it has the following beneficial effects: the CPU preprocesses the image set and selects a number of non-repeated images in advance; the GPU processes these non-repeated images, compares each image before and after processing, records the processing flow, and feeds it back to the CPU, which applies it to the other, repeated images. This reduces the working pressure of the GPU and optimizes the cooperation between the two processors.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block flow diagram of a computing method based on cooperation of a CPU and a GPU.
Fig. 2 is a block flow diagram of step S100 in a computing method based on cooperation of a CPU and a GPU.
Fig. 3 is a flowchart of step S200 in the calculation method based on the cooperation of the CPU and the GPU.
Fig. 4 is a flowchart of step S300 in the calculation method based on the cooperation of the CPU and the GPU.
Fig. 5 is a flowchart of step S400 in the calculation method based on the cooperation of the CPU and the GPU.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application are clearly and completely described, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Fig. 1 is a block flow diagram of a computing method based on cooperation of a CPU and a GPU. In an embodiment of the present application, the computing method based on cooperation of a CPU and a GPU includes:
step S100: receiving a target video, extracting an image track in the target video, and creating a data volume histogram based on the image track;
the target video is a video to be analyzed, the video is composed of an image track and an audio track, and the technical scheme only analyzes the image part, so that the image track in the target video is extracted before processing, all images in the image track are analyzed, and a data structure reflecting the data quantity, namely a data quantity histogram, can be obtained.
Step S200: clustering time points according to the data volume histogram, preprocessing an image frame according to a clustering result, and selecting an image to be processed from the clustering result based on the occupancy rate of the CPU and the GPU; the preprocessing process is completed by a CPU;
the data volume histogram can reflect the difference between the images, when the difference between the adjacent images is smaller, the images are regarded as the same type, the clustering process is continuously circulated, the images can be divided into a plurality of image sets, and the images in the same image set can be processed by adopting the same processing flow.
It is worth mentioning that the clustering of the image sets is adjustable: the stricter the clustering condition, the more clusters are produced, the more images need GPU processing, and the more resources are consumed.
Step S300: sending the image to be processed to the GPU, performing image processing on the image to be processed based on the GPU, recording a processing flow, and feeding back to the CPU;
the method comprises the steps that an image to be processed is sent to a GPU, the GPU is a graphic processor, also called a display core, a visual processor and a display chip, is a microprocessor which is specially used for performing image and graphic related operation on a personal computer, a workstation, a game machine and some mobile equipment (such as a tablet computer, a smart phone and the like), the GPU processes data of the image to be processed, the data processing process can be very fine, and a processing flow is output after the fine processing process is finished, and can be understood as a filter; then, the CPU superimposes the filter on the same type of image.
In this process, the GPU focuses entirely on the fine processing of a small number of images, while the CPU applies the recorded processing flow to the other similar images. Specifically, the similar images are determined by the clustering process and generally have high mutual similarity: in a long video, each scene may last several seconds, those seconds contain many nearly identical frames, and one processing mode is adopted for all of them.
It is necessary to describe an extreme case: when every image forms its own class, the GPU processes each image individually; the processing effect is then the best, but the resource consumption is the highest.
Step S400: in the CPU, image processing is carried out on the similar images based on the processing flow;
after the GPU processes the images, the processing flow is fed back to the CPU, and at the moment, the CPU processes the images of the same type based on the processing flow.
Fig. 2 is a flowchart of step S100 in a calculation method based on cooperation of a CPU and a GPU, where the steps of receiving a target video, extracting an image track in the target video, and creating a data volume histogram based on the image track include:
step S101: receiving a target video, and extracting an image track in the target video to obtain an image set; the composition unit of the image set is an image frame taking the frame number as a label;
step S102: calculating the size of the image according to a preset calculation formula;
step S103: creating a data volume histogram by taking the frame number as an abscissa and the image size as an ordinate;
the calculation formula of the image size is as follows: s=f×w/8;
wherein S is the image size, F is the number of pixels after eliminating repeated color values, and W is the bit depth; the bit depth of a gray image is 8 bits, and that of an RGB image is 24 bits.
The above provides a construction scheme for the data volume histogram: after the target video is received, the images in it are extracted, the size of each image is calculated, and the sizes are arranged in frame order to obtain the data volume histogram.
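For illustration only, the following Python sketch shows one way such a data volume histogram could be built from decoded frames. The use of NumPy, the function names, and the approximation of F by counting distinct color values per frame are assumptions, not requirements of the scheme.

```python
import numpy as np

def image_size(frame: np.ndarray) -> float:
    """Approximate S = F * W / 8, where F is the number of pixels after
    eliminating repeated color values and W is the bit depth
    (8 bits for gray images, 24 bits for RGB images)."""
    if frame.ndim == 2:                          # gray image
        bit_depth = 8
        f = np.unique(frame).size                # distinct gray values (assumed proxy for F)
    else:                                        # RGB image
        bit_depth = 24
        flat = frame.reshape(-1, frame.shape[-1])
        f = np.unique(flat, axis=0).shape[0]     # distinct RGB triples (assumed proxy for F)
    return f * bit_depth / 8

def data_volume_histogram(frames):
    """Abscissa: frame number label; ordinate: computed image size."""
    return list(range(len(frames))), [image_size(frame) for frame in frames]
```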
Fig. 3 is a flowchart of step S200 in a calculation method based on cooperation of a CPU and a GPU, where the step of clustering time points according to the data volume histogram, preprocessing an image frame according to a clustering result, and selecting an image to be processed in the clustering result based on occupancy rates of the CPU and the GPU includes:
step S201: calculating the data quantity difference of each frame of image according to the data quantity histogram;
step S202: counting data quantity difference according to a preset time unit, and merging time units according to the average value of the data quantity difference to obtain a clustering result;
step S203: randomly determining a preset number of sampling positions, and reading point position values of each image frame at the sampling positions; the point position value is a color parameter of the pixel point;
step S204: the occupancy rates of the CPU and the GPU are read, and the rated processing quantity and the point position value conditions are determined according to the occupancy rates;
step S205: and selecting an image to be processed in the image frame based on the point value condition.
The above specifically describes the selection of the images to be processed. The data volume difference of each frame is calculated from the data volume histogram; it is the discrete equivalent of a derivative and reflects how the size changes between adjacent images. If the data volume difference is small enough, the two images are considered identical, and identical images are grouped into one class.
The occupancy rates of the CPU and the GPU are read in real time, the number of images to be processed is determined according to these occupancy rates, and the corresponding number of images to be processed is selected from each class.
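A minimal sketch of the clustering step is given below, assuming the input is the list of sizes from the data volume histogram and that the time unit is one second of fps frames; the merging tolerance (comparing each unit's mean difference with the overall mean) is an assumption standing in for the unspecified merging rule.

```python
import numpy as np

def cluster_time_units(sizes, fps):
    """Group frames by time unit (one second each) and merge adjacent units
    whose mean data quantity difference stays below the overall mean,
    i.e. whose pictures are barely changing."""
    if not sizes:
        return []
    diffs = np.abs(np.diff(sizes))                        # data quantity difference
    units = [list(range(i, min(i + fps, len(sizes))))     # frame numbers per second
             for i in range(0, len(sizes), fps)]
    unit_means = [float(diffs[u[0]:u[-1]].mean()) if len(u) > 1 else 0.0 for u in units]
    overall_mean = float(np.mean(unit_means))

    clusters, current = [], list(units[0])
    for unit, mean_diff in zip(units[1:], unit_means[1:]):
        if mean_diff <= overall_mean:
            current.extend(unit)                          # quiet unit: merge into current cluster
        else:
            clusters.append(current)                      # busy unit: start a new cluster
            current = list(unit)
    clusters.append(current)
    return clusters
```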
As a preferred embodiment of the present application, the step of randomly determining a preset number of sampling positions, and reading the dot position value of each image frame at the sampling positions includes:
randomly determining a preset number of sampling positions; the sampling position is a coordinate;
reading a clustering result, and positioning pixel points of each image frame at sampling positions in the similar images;
and reading the color value of the pixel point, and inputting the color value into a preset merging formula to obtain a point position value.
The above is a refinement of the selection process in steps S203 to S205. First, some sampling positions are determined; a sampling position is a coordinate, and since all images have the same size, the pixel at the same coordinate can be located in every image. The color value parameters of these pixels are read according to the sampling positions and then converted into point position values.
The color value parameters are RGB values, i.e., three components that must be merged into a single value. The merging formula can be a gray-level conversion formula; the converted values are normalized and called point position values. If the image itself is a gray-scale image, its gray value can be used directly as the point position value.
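As one possible realization of the merging formula, the sketch below reads a pixel at a sampling position and converts it to a normalized point position value; the ITU-R BT.601 gray-scale weights are an assumption, since the scheme only requires some gray-level conversion.

```python
import numpy as np

def point_value(frame: np.ndarray, position) -> float:
    """Read the pixel at a sampling position (y, x) and merge its color
    parameters into a single normalized point position value."""
    y, x = position
    pixel = frame[y, x]
    if frame.ndim == 2:                          # gray image: use the gray value directly
        value = float(pixel)
    else:                                        # RGB image: merge the three components
        r, g, b = (float(c) for c in pixel[:3])
        value = 0.299 * r + 0.587 * g + 0.114 * b
    return value / 255.0                         # normalize to [0, 1]
```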
Further, the step of reading the occupancy rates of the CPU and the GPU and determining the condition of the rated processing number and the point location value according to the occupancy rates includes:
the occupancy rates of the CPU and the GPU are read, and rated processing quantity is read from a preset quantity table according to the occupancy rates;
determining the point position value condition according to the rated processing quantity; the point position value condition is the ratio of the point position value to the mode value; the point position value condition is inversely proportional to the rated processing quantity.
By reading the occupancy rates of the CPU and the GPU, the number of images the GPU can process is determined. A time condition is generally used in determining this number: the predicted total processing time of the CPU and the GPU should be minimized. For example, if the GPU takes twice as long per image as the CPU, the GPU should process half as many images as the CPU, so that the two processors run in step.
In general, the higher the rendering level the GPU applies to an image, the longer the processing takes.
On this basis, the point position value condition needs to be set; different point position value conditions lead to different selections of images to be processed.
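The sketch below illustrates this balancing idea. The per-image processing times, the linear occupancy model, and the proportionality constant k all stand in for the patent's preset quantity table and are assumptions made purely for illustration.

```python
def rated_processing_quantity(cpu_occupancy, gpu_occupancy, n_images,
                              gpu_time_per_image=2.0, cpu_time_per_image=1.0):
    """Split n_images between GPU and CPU so their predicted finishing times
    are balanced (e.g. a GPU twice as slow per image gets half as many
    images as the CPU)."""
    gpu_cost = gpu_time_per_image / max(1e-6, 1.0 - gpu_occupancy)   # slower when busy
    cpu_cost = cpu_time_per_image / max(1e-6, 1.0 - cpu_occupancy)
    gpu_share = cpu_cost / (gpu_cost + cpu_cost)
    return max(1, round(n_images * gpu_share))

def point_value_condition(rated_quantity, k=0.5):
    """The point position value condition is inversely proportional to the
    rated processing quantity; k is an illustrative constant."""
    return k / rated_quantity
```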
Specifically, the step of selecting the image to be processed in the image frame based on the point value condition includes:
reading all point position values at each sampling position according to a preset sequence;
acquiring the mode value of the point position values, and calculating the deviation rate of all the point position values according to the mode value;
comparing the deviation rate with the point value condition, and when the deviation rate reaches the point value condition, automatically increasing the selection times of the corresponding image frames;
and when the selection times reach a preset time threshold, selecting the image frame as an image to be processed.
The above is the comprehensive process, combining the calculation of the point position values with the point position value condition. First, all point position values at a given sampling position are read. For images of the same class, every image is similar, so the point position values at the same sampling position have an obvious mode, against which abnormal point position values can be judged (an abnormal point position value is one whose deviation from the mode is large enough to reach the point position value condition). Analyzing one sampling position thus marks some image frames; as the number of sampling positions increases, the probability of each image frame being marked, and its marked count, keep growing until the count threshold is met, at which point the corresponding image frame is selected as an image to be processed.
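A compact sketch of this marking-and-counting procedure is shown below, assuming point_values[p][i] holds the point position value of image frame i at sampling position p; rounding the values before taking the mode is an added assumption, since continuous values rarely repeat exactly.

```python
from collections import Counter

def select_images(point_values, condition, times_threshold):
    """For each sampling position, find the mode of the point position values,
    compute each frame's deviation rate from it, increment the frame's
    selection count when the rate reaches the condition, and return the
    frames whose count reaches the threshold."""
    n_frames = len(point_values[0])
    counts = [0] * n_frames
    for values in point_values:
        rounded = [round(v, 2) for v in values]
        mode = Counter(rounded).most_common(1)[0][0]
        for i, v in enumerate(rounded):
            deviation = abs(v - mode) / mode if mode else abs(v - mode)
            if deviation >= condition:
                counts[i] += 1                   # abnormal at this sampling position
    return [i for i, c in enumerate(counts) if c >= times_threshold]
```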
As a preferred embodiment of the technical scheme of the present application, the step of counting the data quantity difference according to a preset time unit and merging time units according to the mean value of the data quantity difference to obtain a clustering result includes:
counting data quantity difference according to a preset time unit;
calculating the average value of the data quantity difference value;
traversing all data quantity differences according to the average value, and marking jump image frames according to the traversing result;
and updating the clustering result by taking the jump image frame as an endpoint.
On the basis of the above, the application provides an additional clustering scheme. The original scheme clusters by time, and the time unit is generally one second; however, the picture may change partway through a second rather than exactly at its boundary, so the image frames within one second need to be segmented a second time and the clustering result updated.
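The secondary segmentation could look like the sketch below: within one time unit, frames whose data quantity difference is far above the unit's mean are marked as jump frames and become new cluster endpoints. The jump factor is an assumption.

```python
import numpy as np

def mark_jump_frames(diffs, jump_factor=2.0):
    """Return the indices (relative to the unit) of jump frames, i.e. frames
    whose data quantity difference greatly exceeds the unit's mean."""
    diffs = np.asarray(diffs, dtype=float)
    mean = diffs.mean() if diffs.size else 0.0
    return [i + 1 for i, d in enumerate(diffs) if mean and d > jump_factor * mean]

def split_at_jumps(frame_numbers, jump_indices):
    """Split a one-second cluster at each jump frame, updating the clustering result."""
    result, current = [], []
    jumps = set(jump_indices)
    for idx, frame in enumerate(frame_numbers):
        if idx in jumps and current:
            result.append(current)
            current = []
        current.append(frame)
    result.append(current)
    return result
```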
Fig. 4 is a flow chart of step S300 in a computing method based on cooperation of a CPU and a GPU, where the steps of sending an image to be processed to the GPU, performing image processing on the image to be processed based on the GPU, recording a processing flow, and feeding back the processing flow to the CPU include:
step S301: sending the image to be processed to the GPU;
step S302: performing image processing on the image to be processed based on the GPU;
step S303: comparing the image to be processed after the image processing with the original image to be processed, and outputting an additional image layer;
step S304: and sending the image to be processed after the package image processing and the additional image layer to a CPU.
The above defines the processing flow. The additional layer in this technical scheme is very simple and can be understood as a filter that adjusts parameters such as hue, saturation, and contrast of the image.
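A minimal sketch of such an additional layer is the signed per-pixel difference between the processed image and the original, as below. This representation is an assumption: it reproduces the GPU's result exactly on the image it was derived from and only approximately on the similar images, which is consistent with the simple filter-like processing described here.

```python
import numpy as np

def additional_layer(original: np.ndarray, processed: np.ndarray) -> np.ndarray:
    """Compare the processed image with the original image to be processed
    and output an additional layer as a signed per-pixel difference."""
    return processed.astype(np.int16) - original.astype(np.int16)
```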
Fig. 5 is a flowchart of step S400 in a computing method based on cooperation of a CPU and a GPU, where the steps of performing image processing on similar images based on a processing flow in the CPU include:
step S401: receiving an image to be processed after image processing and an additional image layer, and reading similar images in a clustering result by taking the image to be processed after image processing as a center; the similar images are images between two adjacent images to be processed;
step S402: and carrying out superposition processing on the similar images according to the additional layers.
After the additional layer is generated, the CPU processes the images. It should be noted that the description above uses the concept of a "similar image": the additional layer defined by a given image to be processed only applies to part of the images after (or before) it, namely all images between it and the next image to be processed, because the next image to be processed has its own new additional layer.
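Applying the additional layer on the CPU side could then be sketched as follows; clamping to the 8-bit range is an added safeguard.

```python
import numpy as np

def apply_layer(similar_image: np.ndarray, layer: np.ndarray) -> np.ndarray:
    """Superimpose the GPU-produced additional layer onto a similar image."""
    out = similar_image.astype(np.int16) + layer
    return np.clip(out, 0, 255).astype(np.uint8)

def process_similar_images(similar_images, layer):
    """Apply one additional layer to all frames between two adjacent images
    to be processed (the 'similar images')."""
    return [apply_layer(frame, layer) for frame in similar_images]
```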
In summary, compared with the prior art, the application has the following beneficial effects:
according to the application, the CPU is used for preprocessing the image set, a plurality of non-repeated images are selected in advance, the non-repeated images are processed by the GPU, the GPU compares the front image with the rear image after the processing is completed, the processing flow is recorded, and the processing flow is fed back to the CPU for processing other repeated images, so that the working pressure of the GPU is reduced, and the matching process is optimized.
It should be noted that, from the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by means of software plus necessary general hardware platform. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments. In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (8)

1. A computing method based on cooperation of a CPU and a GPU, the method comprising:
receiving a target video, extracting an image track in the target video, and creating a data volume histogram based on the image track;
clustering time points according to the data volume histogram, preprocessing an image frame according to a clustering result, and selecting an image to be processed from the clustering result based on the occupancy rate of the CPU and the GPU; the preprocessing process is completed by a CPU;
sending the image to be processed to the GPU, performing image processing on the image to be processed based on the GPU, recording a processing flow, and feeding back to the CPU;
in the CPU, image processing is carried out on the similar images based on the processing flow;
the step of clustering the time points according to the data volume histogram, preprocessing the image frames according to the clustering result, and selecting the images to be processed in the clustering result based on the occupancy rate of the CPU and the GPU comprises the following steps:
calculating the data quantity difference of each frame of image according to the data quantity histogram;
counting data quantity difference according to a preset time unit, and merging time units according to the average value of the data quantity difference to obtain a clustering result;
randomly determining a preset number of sampling positions, and reading point position values of each image frame at the sampling positions; the point position value is a color parameter of the pixel point;
the occupancy rates of the CPU and the GPU are read, and the rated processing quantity and the point position value conditions are determined according to the occupancy rates;
and selecting an image to be processed in the image frame based on the point value condition.
2. The method for computing based on CPU and GPU cooperation according to claim 1, wherein the steps of receiving a target video, extracting an image track in the target video, and creating a data volume histogram based on the image track comprise:
receiving a target video, and extracting an image track in the target video to obtain an image set; the composition unit of the image set is an image frame taking the frame number as a label;
calculating the size of the image according to a preset calculation formula;
creating a data volume histogram by taking the frame number as an abscissa and the image size as an ordinate;
the calculation formula of the image size is as follows: s=f×w/8;
wherein S is the image size, F is the number of pixels after eliminating repeated color values, and W is the bit depth; the bit depth of a gray image is 8 bits, and that of an RGB image is 24 bits.
3. The CPU-and-GPU-cooperation-based computing method of claim 1, wherein the step of randomly determining a preset number of sampling locations at which to read the dot-location values of each image frame comprises:
randomly determining a preset number of sampling positions; the sampling position is a coordinate;
reading a clustering result, and positioning pixel points of each image frame at sampling positions in the similar images;
and reading the color value of the pixel point, and inputting the color value into a preset merging formula to obtain a point position value.
4. The method for computing based on cooperation of a CPU and a GPU according to claim 1, wherein the step of reading the occupancy rates of the CPU and the GPU and determining the rated processing number and the point location value condition according to the occupancy rates comprises:
the occupancy rates of the CPU and the GPU are read, and rated processing quantity is read from a preset quantity table according to the occupancy rates;
determining the point position value condition according to the rated processing quantity; the point position value condition is the ratio of the point position value to the mode value; the point position value condition is inversely proportional to the rated processing quantity.
5. The method for computing based on CPU and GPU cooperation according to claim 4, wherein said selecting an image to be processed in an image frame based on a point value condition comprises:
reading all point position values at each sampling position according to a preset sequence;
acquiring the mode value of the point position values, and calculating the deviation rate of all the point position values according to the mode value;
comparing the deviation rate with the point value condition, and when the deviation rate reaches the point value condition, automatically increasing the selection times of the corresponding image frames;
and when the selection times reach a preset time threshold, selecting the image frame as an image to be processed.
6. The method for computing based on CPU and GPU cooperation according to claim 1, wherein the step of counting the data quantity difference according to a preset time unit and merging time units according to the mean value of the data quantity difference to obtain a clustering result comprises:
counting data quantity difference according to a preset time unit;
calculating the average value of the data quantity difference value;
traversing all data quantity differences according to the average value, and marking jump image frames according to the traversing result;
and updating the clustering result by taking the jump image frame as an endpoint.
7. The method for computing based on cooperation of a CPU and a GPU according to claim 1, wherein the step of sending the image to be processed to the GPU, performing image processing on the image to be processed based on the GPU, recording a processing flow, and feeding back the processing flow to the CPU includes:
sending the image to be processed to the GPU;
performing image processing on the image to be processed based on the GPU;
comparing the image to be processed after the image processing with the original image to be processed, and outputting an additional image layer;
and packaging the image to be processed after image processing together with the additional image layer and sending them to the CPU.
8. The method for computing based on CPU and GPU cooperation according to claim 1, wherein the step of performing, in the CPU, image processing on the similar images based on the processing flow comprises:
receiving an image to be processed after image processing and an additional image layer, and reading similar images in a clustering result by taking the image to be processed after image processing as a center; the similar images are images between two adjacent images to be processed;
and carrying out superposition processing on the similar images according to the additional layers.
CN202311027658.7A 2023-08-16 2023-08-16 Calculation method based on cooperation of CPU and GPU Active CN116739884B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311027658.7A CN116739884B (en) 2023-08-16 2023-08-16 Calculation method based on cooperation of CPU and GPU

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311027658.7A CN116739884B (en) 2023-08-16 2023-08-16 Calculation method based on cooperation of CPU and GPU

Publications (2)

Publication Number Publication Date
CN116739884A CN116739884A (en) 2023-09-12
CN116739884B (en) 2023-11-03

Family

ID=87917280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311027658.7A Active CN116739884B (en) 2023-08-16 2023-08-16 Calculation method based on cooperation of CPU and GPU

Country Status (1)

Country Link
CN (1) CN116739884B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117314728B (en) * 2023-11-29 2024-03-12 深圳市七彩虹禹贡科技发展有限公司 GPU operation regulation and control method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101964911A (en) * 2010-10-09 2011-02-02 浙江大学 Ground power unit (GPU)-based video layering method
CN102647588A (en) * 2011-02-17 2012-08-22 北京大学深圳研究生院 GPU (Graphics Processing Unit) acceleration method used for hierarchical searching motion estimation
CN105550974A (en) * 2015-12-13 2016-05-04 复旦大学 GPU-based acceleration method of image feature extraction algorithm
CN113076190A (en) * 2021-02-23 2021-07-06 北京蓝耘科技股份有限公司 Computing method based on cooperation of CPU and GPU

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7706633B2 (en) * 2004-04-21 2010-04-27 Siemens Corporation GPU-based image manipulation method for registration applications

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101964911A (en) * 2010-10-09 2011-02-02 浙江大学 Ground power unit (GPU)-based video layering method
CN102647588A (en) * 2011-02-17 2012-08-22 北京大学深圳研究生院 GPU (Graphics Processing Unit) acceleration method used for hierarchical searching motion estimation
CN105550974A (en) * 2015-12-13 2016-05-04 复旦大学 GPU-based acceleration method of image feature extraction algorithm
CN113076190A (en) * 2021-02-23 2021-07-06 北京蓝耘科技股份有限公司 Computing method based on cooperation of CPU and GPU

Also Published As

Publication number Publication date
CN116739884A (en) 2023-09-12

Similar Documents

Publication Publication Date Title
US11275961B2 (en) Character image processing method and apparatus, device, and storage medium
WO2020038128A1 (en) Video processing method and device, electronic device and computer readable medium
CN109800698B (en) Icon detection method based on deep learning, icon detection system and storage medium
CN116739884B (en) Calculation method based on cooperation of CPU and GPU
US11030495B2 (en) Systems and methods for instance segmentation
WO2018133825A1 (en) Method for processing video images in video call, terminal device, server, and storage medium
US9092668B2 (en) Identifying picture areas based on gradient image analysis
CN112651953B (en) Picture similarity calculation method and device, computer equipment and storage medium
CN108182457B (en) Method and apparatus for generating information
CN113822817A (en) Document image enhancement method and device and electronic equipment
CN114723636A (en) Model generation method, device, equipment and storage medium based on multi-feature fusion
CN113486881B (en) Text recognition method, device, equipment and medium
US11881044B2 (en) Method and apparatus for processing image, device and storage medium
CN113313066A (en) Image recognition method, image recognition device, storage medium and terminal
CN106056575B (en) A kind of image matching method based on like physical property proposed algorithm
CN115620321B (en) Table identification method and device, electronic equipment and storage medium
Wu et al. A hybrid image retargeting approach via combining seam carving and grid warping
CN113645484B (en) Data visualization accelerated rendering method based on graphic processor
JP4967045B2 (en) Background discriminating apparatus, method and program
CN115205163A (en) Method, device and equipment for processing identification image and storage medium
CN113947146A (en) Sample data generation method, model training method, image detection method and device
CN114399497A (en) Text image quality detection method and device, computer equipment and storage medium
CN113112567A (en) Method and device for generating editable flow chart, electronic equipment and storage medium
CN111738903B (en) Method, device and equipment for optimizing layered material of object
CN114708592B (en) Seal security level judging method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant