WO2024094086A1 - Image processing method, apparatus, device, medium and product - Google Patents

Image processing method, apparatus, device, medium and product

Info

Publication number
WO2024094086A1
WO2024094086A1 (PCT/CN2023/129171, CN2023129171W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
algorithm
target
node
image processing
Prior art date
Application number
PCT/CN2023/129171
Other languages
English (en)
French (fr)
Inventor
占涵冰
赵洪达
刘莹
Original Assignee
抖音视界有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 抖音视界有限公司
Publication of WO2024094086A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects

Definitions

  • the embodiments of the present disclosure relate to the field of computer technology, and in particular, to an image processing method, apparatus, device, medium, and product.
  • an automatic image processing chain can be set up to process the image and apply the result.
  • a user can upload an image, the uploaded image is processed by the image processing chain to obtain the final image processing result, and the image processing result is displayed to the user.
  • for example, a user can upload a landscape image, and the landscape image is processed through image processing steps such as face region recognition, face region image cropping, and face region image identity recognition to obtain the final identity recognition result.
  • the embodiments of the present disclosure provide an image processing method, apparatus, device, medium and product to overcome the problem of low image processing accuracy when all images are processed by the same fixed image processing chain.
  • an embodiment of the present disclosure provides an image processing method, including: obtaining an image to be processed in response to an image processing request; parsing the image to be processed from at least one parsing dimension to obtain at least one parsing result; selecting a target processing algorithm for the image to be processed according to the at least one parsing result; and performing corresponding image processing on the image to be processed based on the target processing algorithm to obtain an image processing result.
  • an image processing device including:
  • an image acquisition unit configured to obtain an image to be processed in response to an image processing request;
  • an image parsing unit configured to parse the image to be processed from at least one parsing dimension to obtain at least one parsing result of the image to be processed;
  • an algorithm selection unit configured to select a target processing algorithm for the image to be processed according to the at least one parsing result;
  • an image processing unit configured to perform corresponding image processing on the image to be processed based on the target processing algorithm to obtain an image processing result.
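The four units above can be sketched as plain functions. This is an illustrative sketch, not the patent's implementation: the function names, the brightness threshold, and the toy image representation (a dict of measured values) are all assumptions made for the example.

```python
from typing import Callable, Dict, List

def acquire_image(request: dict) -> dict:
    """Image acquisition unit: obtain the image to be processed from the request."""
    return request["image"]

def parse_image(image: dict, dimensions: List[str]) -> Dict[str, float]:
    """Image parsing unit: parse the image from at least one parsing dimension."""
    return {dim: image.get(dim, 0.0) for dim in dimensions}

def select_algorithms(results: Dict[str, float]) -> List[Callable[[dict], dict]]:
    """Algorithm selection unit: pick target processing algorithms from the results."""
    algorithms: List[Callable[[dict], dict]] = []
    if results.get("brightness", 100.0) < 30:  # assumed threshold for a dark image
        algorithms.append(lambda img: {**img, "brightness": img["brightness"] + 40})
    return algorithms

def process_image(image: dict, algorithms: List[Callable[[dict], dict]]) -> dict:
    """Image processing unit: apply the selected algorithms in sequence."""
    for algo in algorithms:
        image = algo(image)
    return image

request = {"image": {"brightness": 20.0}}
img = acquire_image(request)
results = parse_image(img, ["brightness"])
target = process_image(img, select_algorithms(results))
```

Here the dark image (brightness 20.0) triggers the assumed brightness-enhancement algorithm, yielding a target image with brightness 60.0.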
  • an embodiment of the present disclosure provides an electronic device, including: a processor and a memory;
  • the memory stores computer-executable instructions
  • the processor executes the computer-executable instructions stored in the memory, so that the processor performs the image processing method described in the first aspect and the various possible designs of the first aspect.
  • an embodiment of the present disclosure provides a computer-readable storage medium, in which computer-executable instructions are stored.
  • when a processor executes the computer-executable instructions, the image processing method described in the first aspect and the various possible designs of the first aspect is implemented.
  • an embodiment of the present disclosure provides a computer program product, including a computer program, which, when executed by a processor, implements the image processing method as described in the first aspect and various possible designs of the first aspect.
  • FIG1 is a diagram showing an application example of an image processing method provided by an embodiment of the present disclosure
  • FIG2 is a flow chart of an image processing method provided by an embodiment of the present disclosure.
  • FIG3 is another flow chart of an image processing method provided by an embodiment of the present disclosure.
  • FIG4 is a flow chart of another embodiment of an image processing method provided by an embodiment of the present disclosure.
  • FIG5 is another schematic flow chart of an image processing method provided by an embodiment of the present disclosure.
  • FIG6 is an example diagram of an algorithm decision tree provided by an embodiment of the present disclosure.
  • FIG7 is a schematic diagram of the structure of an image processing device provided by an embodiment of the present disclosure.
  • FIG8 is a schematic diagram of the hardware structure of an electronic device provided in an embodiment of the present disclosure.
  • the technical solution disclosed in the present invention can be applied to real-time image processing scenarios. By parsing the image to be processed acquired in real time from various analysis dimensions and using the results of the analysis to select the image processing algorithm, it is possible to automatically select the processing algorithm for the image and improve the accuracy and precision of the image algorithm selection.
  • the image processing process generally includes: the user can upload an image, the image uploaded by the user is processed by the image processing chain, the final image processing result is obtained, and the image processing result is displayed to the user.
  • the user can upload a landscape image, and the landscape image is processed through image processing chains such as face region recognition, face region image cropping, and face region image identity recognition to obtain the final identity recognition result.
  • the image processing chain may include multiple image processing algorithms. The algorithms of each processing chain are usually fixed and are trained using images that match the background the chain was designed for, so they cannot be applied to other chains. Therefore, if an arbitrary image is input into an image processing chain and the background of the image differs from the background the chain was designed for, image processing and recognition errors may occur, and the image processing accuracy is not high.
  • the technical solution disclosed in the present invention adopts an image analysis method to obtain the analysis results of the image in different dimensions, such as clarity, background, brightness, etc.
  • an algorithm is adaptively selected for the image, rather than using a fixed image processing chain to process the collected image, so that the method has a wider application range and higher processing efficiency.
  • the image to be processed can be parsed from at least one parsing dimension, and at least one parsing result can be obtained.
  • a target processing algorithm can be selected for the image to be processed through at least one parsing result. The selection of the target processing algorithm depends on the parsing result of the image to be processed, and adaptive algorithm selection of the image to be processed is achieved. Through the algorithm selection, the processing efficiency and accuracy of the image can be improved.
  • the target processing algorithm can be used to perform corresponding image improvement processing on the image to be processed to obtain an improved target image.
  • This image processing method relies on the image itself to select the target processing algorithm, which improves the fit between the selected target processing algorithm and the image to be processed. When the image to be processed is processed using the target processing algorithm, personalized processing of the image is achieved, which can improve the processing efficiency and accuracy of the image to be processed.
  • FIG1 is a diagram of an application network architecture according to the image processing method disclosed herein.
  • the application network architecture according to the embodiment of the present disclosure may include an electronic device and a client connected to the electronic device via a local area network or a wide area network. The electronic device may be, for example, a personal computer, an ordinary server, a super personal computer, a cloud server or another type of server.
  • the present disclosure does not impose particular restrictions on the specific type of the electronic device.
  • the client may be, for example, a mobile phone, a tablet computer, a personal computer, a smart home appliance, a wearable device or another terminal device. The present disclosure does not impose particular restrictions on the specific type of the client.
  • for example, the electronic device is a cloud server 1.
  • the client 2 can be one or more devices such as a mobile phone 21 and a tablet computer 22.
  • any client 2 can initiate an image processing request to the cloud server 1 and provide the image to be processed to the cloud server 1.
  • the cloud server 1 can respond to the image processing request, obtain the image to be processed, and select the processing algorithm for the image to be processed according to the dimensional analysis results of the image to be processed, obtaining a target processing algorithm that can be used to perform improvement processing on the image to be processed and obtain the improved target image.
  • by selecting the algorithm based on the analysis results of the image, adaptive algorithm selection for the image can be realized, and the selection efficiency and accuracy can be improved.
  • FIG. 2 is a flowchart of an embodiment of an image processing method provided by an embodiment of the present disclosure.
  • the image processing method may include the following steps:
  • the image processing request may be initiated by a user.
  • an image processing page may be provided, and the image processing page may include an image upload control or a video upload control.
  • when the electronic device detects that the user has triggered the image upload control or the video upload control, it may determine that the user has initiated an image processing request and obtain the image or video uploaded by the user, so as to obtain the image to be processed from it.
  • the images to be processed may include: a single-person image, a multi-person image, different types of commodity images, images taken in natural scenes, etc. Since the images to be processed belong to many categories, and different categories are processed for different purposes and from different angles, directly inputting such images into a preset image processing model, as in the related art, may result in low image processing accuracy. Therefore, in this embodiment, an adaptive method is adopted: image analysis is performed on the image to be processed, and the analysis results are used to obtain the image processing algorithm appropriate for the image.
  • the image type of the image to be processed is unknown.
  • the image to be processed may belong to any of a plurality of image types.
  • the image type may refer to any of several types, such as a people-count type set according to the number of people in the image, a type corresponding to the object contained in the image, or a classification of the image background, such as vehicle background, building background, natural scenery background, solid color background, etc. This embodiment does not impose particular restrictions on the type to which the image belongs.
  • the processing algorithms for different image types may be different.
  • the image to be processed may be analyzed from different analysis dimensions.
  • an analysis dimension may refer to the angle from which image analysis is performed on the image to be processed; that is, the content of the image analysis and the requirements for the analysis results may be defined through the analysis dimension.
  • the analysis result may be a result obtained by analyzing the image to be processed in a certain analysis dimension.
  • for example, if the analysis dimension is a brightness dimension, the analysis result may be an image brightness value obtained by the analysis.
  • the parsing dimensions may include one or more dimensions, which can be set according to actual usage requirements. For example, for simple image processing scenarios, such as face recognition scenarios, the number of portraits and brightness can be used as parsing dimensions. For image beautification scenarios, brightness, resolution, noise and other dimensions can be used as parsing dimensions.
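The scenario-dependent choice of parsing dimensions described above can be illustrated with a small configuration table. The scenario keys, dimension names, and default are hypothetical examples, not values from the patent.

```python
# Hypothetical mapping of image processing scenarios to parsing dimensions,
# following the examples in the text (face recognition vs. beautification).
PARSING_DIMENSIONS = {
    "face_recognition": ["portrait_count", "brightness"],
    "image_beautification": ["brightness", "resolution", "noise"],
}

def dimensions_for(scenario: str) -> list:
    """Return the parsing dimensions configured for a scenario (assumed default: brightness only)."""
    return PARSING_DIMENSIONS.get(scenario, ["brightness"])
```

A simple scenario (face recognition) uses fewer dimensions, while beautification adds resolution and noise, matching the usage-driven configuration the text describes.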
  • the image to be processed may be analyzed from at least one analysis dimension to obtain analysis results of each analysis dimension.
  • the at least one analysis result may include analysis results corresponding to at least one analysis dimension.
  • the at least one analysis result may also be a valid analysis result among the analysis results corresponding to at least one analysis dimension.
  • a valid analysis result may be an analysis result that passes a validity judgment on the analysis results of the corresponding analysis dimension.
  • the target processing algorithm may be an image processing algorithm for performing image processing on the image to be processed.
  • there may be at least one target processing algorithm.
  • the target processing algorithm can be obtained by using at least one analytical result to perform result analysis and algorithm selection. Through at least one analytical result, the target processing algorithm required for the image to be processed can be accurately obtained, thereby achieving efficient use of the analytical results of the image to be processed.
  • the target processing algorithm may include at least one, and the target processing algorithm may be used to perform corresponding image improvement processing on the image to be processed as a whole to obtain an improved target image.
  • the target image can be the output data obtained by the last processing algorithm among the target processing algorithms.
  • the image to be processed can be input as input data to the first processing algorithm among the target processing algorithms.
  • Each processing algorithm is executed sequentially, and the output data of the previous processing algorithm can be used as input data of the next processing algorithm.
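The sequential execution described above, where each algorithm's output becomes the next algorithm's input, is a simple fold over the algorithm list. This is a minimal sketch; the string-based stand-in algorithms are placeholders for real image operations.

```python
from functools import reduce

def run_chain(image, algorithms):
    """Feed the image to the first algorithm; each output becomes the next input."""
    return reduce(lambda data, algo: algo(data), algorithms, image)

# Toy stand-ins for real image processing algorithms (assumptions for illustration).
denoise = lambda img: img.replace("noisy ", "")
brighten = lambda img: img + " (brightened)"

result = run_chain("noisy photo", [denoise, brighten])
```

The final value of `result` is the target image, i.e. the output of the last algorithm in the chain.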
  • the image to be processed is parsed from at least one parsing dimension to obtain at least one parsing result.
  • the target processing algorithm can be selected for the image to be processed based on the at least one parsing result.
  • the selection of the target processing algorithm depends on the parsing result of the image to be processed, and an adaptive algorithm selection is achieved for the image to be processed.
  • the image processing efficiency and accuracy can be improved through the algorithm selection.
  • the target processing algorithm can be used to perform the corresponding image improvement processing on the image to be processed to obtain the improved target image.
  • an image analysis model may be set, and the image to be processed may be analyzed from different dimensions through various analysis models to obtain analysis results of each dimension.
  • the image to be processed may be analyzed from at least one analysis dimension to obtain at least one analysis result of the image to be processed, which may include:
  • the image to be processed is respectively input into the image analysis model of each analysis dimension for analysis calculation, and the analysis results corresponding to the image to be processed in at least one analysis dimension are obtained, thereby obtaining at least one analysis result of the image to be processed.
  • the image analysis model may include at least one of an image quality analysis model, an image meta-information analysis model, an object recognition model, a scene judgment model, a copyright identification model, and the like.
  • the image quality analysis model can be used to detect parameters related to the quality of the image.
  • the image quality analysis model can include: at least one of a VQScore (Video Quality Score) model, a noise detection model, a brightness detection model, a color detection model, a contrast detection model, and an aesthetic detection model.
  • Image meta information may refer to attribute information involved in the generation process of an image.
  • the image meta information parsing model may be used to parse certain meta information of an image.
  • the image meta information parsing model may include: at least one of a resolution detection model, a format acquisition model, a coding quality parsing model, a transparency detection model, and a theme color analysis model.
  • the object recognition model may be a model for recognizing the type of an object in an image.
  • the object recognition model may include at least one of a portrait recognition model, a vehicle recognition model, a product recognition model, and a license plate recognition model.
  • the scene judgment model can be used to detect the scene of the image acquisition.
  • the scene judgment model can include: a landscape detection model, a synthetic image detection model, and a comic image detection model.
  • the copyright identification model can be used to detect the copyright information of an image, and the copyright information can be used for subsequent algorithm selection. For example, if a watermark is detected in an image, the watermark removal model can be used as an image processing algorithm.
  • the copyright identification model can include at least one of a text recognition model, a logo detection model, a PS trace recognition model, and a watermark detection model.
  • the image to be processed is input into the image analysis model of each analysis dimension for analysis and calculation, so that the image to be processed is analyzed in at least one analysis dimension.
  • by setting an image analysis model for each analysis dimension, the image to be processed can be analyzed from each analysis dimension, thereby improving the analysis efficiency and accuracy of the image to be processed.
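The per-dimension dispatch above can be sketched as feeding one image into a dictionary of model callables, one per analysis dimension. The stand-in models and the dict-based image are assumptions for illustration; real models (VQScore, noise detection, etc.) would replace them.

```python
def analyze(image, models):
    """Input the image into the analysis model of each dimension; collect results per dimension."""
    return {dimension: model(image) for dimension, model in models.items()}

# Placeholder models keyed by analysis dimension (names and formulas assumed).
models = {
    "brightness": lambda img: img["pixels_mean"],          # stand-in brightness model
    "resolution": lambda img: img["width"] * img["height"],  # stand-in resolution model
}

results = analyze({"pixels_mean": 25.0, "width": 640, "height": 480}, models)
```

Each entry of `results` is the analysis result of one dimension, giving the "at least one analysis result" used later for algorithm selection.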
  • the technical solution provided in this embodiment can analyze the acquired image to be processed from at least one analysis dimension to obtain at least one analysis result.
  • a target processing algorithm can be selected for the image to be processed based on the at least one analysis result. The selection of the target processing algorithm depends on the analysis results of the image to be processed, so adaptive algorithm selection for the image to be processed is realized. Through this algorithm selection, the processing efficiency and accuracy of the image can be improved.
  • the target processing algorithm can be used to perform corresponding image improvement processing on the image to be processed to obtain an improved target image.
  • This image processing method relies on the image itself to select the target processing algorithm, which improves the fit between the selected target processing algorithm and the image to be processed. When the image to be processed is processed using the target processing algorithm, personalized processing of the image is realized, which can improve the processing efficiency and accuracy of the image to be processed.
  • FIG. 3 is a flow chart of another embodiment of an image processing method provided by an embodiment of the present disclosure. It differs from the above-mentioned embodiment in that selecting a target processing algorithm for the image to be processed according to at least one analysis result may include:
  • the algorithm decision tree provided in this embodiment may be a data structure that uses a "tree" structure to make decisions.
  • the image processing node may be a node in the algorithm decision tree.
  • the algorithm decision tree can be used for image algorithm determination, that is, to determine the algorithm associated with each image processing node.
  • the algorithm decision tree may include at least one image processing node, and the at least one image processing node may be connected according to the algorithm selection strategy of each node to form the algorithm decision tree.
  • Each image processing node may be associated with an algorithm selection strategy.
  • the algorithm selection strategy of each image processing node may be used for algorithm selection of the corresponding image processing node.
  • an algorithm selection strategy may be set for each image processing node, and the image processing algorithm of the corresponding image processing node may be obtained by executing that strategy.
  • there may be multiple target parsing results, each of which may correspond to parsing content information, and the parsing content information may include text describing the parsed content.
  • the parsing result may include, for example, a brightness value obtained by a brightness detection model.
  • the image processing node may include an image enhancement node, and the image enhancement node may include a brightness enhancement node.
  • the target analysis result corresponding to each image processing node may include a target analysis result selected according to the processing function or processing requirement of the image processing node.
  • the algorithm decision tree may include at least one image processing node, and a target processing node that meets the algorithm execution condition in the decision algorithm tree may be determined.
  • the algorithm execution condition may refer to a judgment condition for the image processing node to execute the associated algorithm. If it is determined that the image processing node meets the algorithm execution condition, the image processing node is determined to be a target processing node. If it is determined that the image processing node does not meet the algorithm execution condition, the image processing node is not determined to be a target processing node.
  • the algorithm execution conditions of different image processing nodes may be different and may be determined according to the processing function of the image processing node. Taking the brightness enhancement node as an example, the algorithm execution condition may be set to a brightness value less than 30, while taking the color correction node as an example, the algorithm execution condition may be set to a color value less than 55. Therefore, for any image processing node, the corresponding algorithm execution condition may be determined according to the processing function or requirement of the image processing node.
  • the algorithm selection strategy may include an algorithm execution condition, and the algorithm execution condition may be used as a part of the algorithm selection strategy.
  • the algorithm selection strategy may also include a specific selection strategy for the algorithm.
  • the algorithm selection strategy may include two steps. The first step may be to determine whether the image processing node meets the algorithm execution condition according to the algorithm execution condition. The second step is to select the algorithm associated with the node to obtain the target processing algorithm of the node when the image processing node is a target processing node that meets the algorithm execution condition.
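The two-step strategy above (check the execution condition, then select the node's associated algorithm) can be sketched with a small node type. The class, field names, threshold, and algorithm identifier are assumptions; the brightness threshold of 30 follows the example given earlier in the text.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ImageProcessingNode:
    """One decision-tree node: an execution condition and an associated algorithm."""
    name: str
    condition: Callable[[float], bool]  # algorithm execution condition (step 1)
    algorithm: Optional[str]            # algorithm associated with the node (step 2)

def select_at_node(node: ImageProcessingNode, target_result: float) -> Optional[str]:
    """Step 1: judge the execution condition; step 2: select the node's algorithm if met."""
    if node.condition(target_result):
        return node.algorithm
    return None

brightness_node = ImageProcessingNode(
    name="brightness_enhancement",
    condition=lambda v: v < 30,          # example condition from the text
    algorithm="brightness_enhancer",     # assumed algorithm identifier
)
```

A target parsing result of 20 meets the condition and selects the algorithm; a result of 80 does not, so the node is skipped.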
  • when selecting an algorithm with the algorithm decision tree, the algorithm selection can be performed according to the at least one image processing node and the algorithm selection strategy managed by each image processing node.
  • FIG. 4 is a flowchart of another embodiment of an image processing method provided by this embodiment.
  • using the target parsing results of each image processing node to determine the target processing nodes in the algorithm decision tree that meet the algorithm execution condition may include:
  • for an image processing node, if it is determined that the target parsing result of the image processing node meets the algorithm execution condition, the image processing node is determined to be a target processing node; after the target processing algorithm of the target processing node is selected according to the algorithm selection strategy, processing proceeds to the next image processing node associated with the target processing algorithm, where it is again determined whether the target parsing result of that image processing node meets the algorithm execution condition. In another embodiment, if it is determined that the target parsing result of the image processing node does not meet the algorithm execution condition, processing proceeds directly to the next image processing node associated with the image processing node, where the same determination is made.
  • the algorithm execution condition may be implemented by an algorithm selection strategy to perform a conditional judgment step.
  • the algorithm selection strategy includes at least one numerical interval, and the at least one numerical interval includes a first numerical interval and a second numerical interval.
  • the target parsing result of the image processing node satisfies the algorithm execution condition, which may include that the target parsing result of the image processing node does not belong to the first numerical interval of the at least one numerical interval, or that the target parsing result of the image processing node belongs to the second numerical interval of the at least one numerical interval.
  • Step 404: Enter the next image processing node associated with the target processing algorithm, and return to the judgment step in step 401 until the traversal of the algorithm decision tree is completed.
  • Step 405: Enter the next image processing node associated with the image processing node, and return to the judgment step in step 401 until the traversal of the algorithm decision tree is completed.
  • the algorithm decision tree may include multiple layers, each layer may include one or more nodes, and there may be connections between nodes at different layers.
  • a node may refer to an image processing node.
  • a top-down order means starting from the first layer of the algorithm decision tree and, from top to bottom, determining whether each traversed image processing node meets the algorithm execution conditions.
  • the image processing node can be selected according to the algorithm selection strategy to achieve the overall decision of the algorithm decision tree.
  • the algorithm decision tree may form a tree structure for at least one image processing node according to the connection relationship corresponding to the algorithm selection strategy between the image processing nodes.
  • when executing the selection of the target processing node, it is possible to start from the first image processing node in the algorithm decision tree, in a top-down order, and determine whether the target parsing result of each image processing node meets the algorithm execution condition.
  • by judging the algorithm execution condition, the target processing algorithm that meets the algorithm execution condition can be obtained, thereby achieving efficient and accurate selection of the target processing algorithm.
  • the next image processing node can be entered to achieve smooth execution of the algorithm decision tree until the algorithm decision tree traversal is completed. Accurate selection of the target processing node is achieved by determining the sequential conditions of each image processing node in the algorithm decision tree.
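The top-down traversal just described can be sketched as a walk over linked nodes, collecting the algorithm of every node whose condition holds and moving to the next node either way. The dict-based tree layout, dimension names, and conditions are assumptions; the patent only fixes the traversal order and the condition check.

```python
def traverse(tree, results):
    """Walk nodes top-down; collect algorithms of nodes that meet their execution condition."""
    selected, node = [], tree
    while node is not None:
        value = results[node["dimension"]]
        if node["condition"](value):       # node meets the algorithm execution condition
            selected.append(node["algorithm"])
        node = node.get("next")            # proceed to the next node in either case
    return selected

# Assumed two-node tree: brightness enhancement, then denoising.
tree = {
    "dimension": "brightness", "condition": lambda v: v < 30, "algorithm": "brighten",
    "next": {
        "dimension": "noise", "condition": lambda v: v > 0.5, "algorithm": "denoise",
        "next": None,
    },
}

picked = traverse(tree, {"brightness": 20, "noise": 0.8})
```

For a dark, noisy image, both nodes meet their conditions, so both algorithms are selected in traversal order.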
  • the algorithm selection strategy includes at least one numerical interval of the image processing node, each numerical interval is associated with an image processing algorithm or a next image processing node, and the image processing algorithm of each numerical interval is connected to the next image processing node;
  • the algorithm decision tree is traversed. If the target parsing result of any image processing node does not belong to the first numerical interval of the image processing node, the image processing node is determined to be a target processing node that meets the algorithm execution condition, so that the target processing nodes collected over the algorithm decision tree are obtained.
  • An image processing node can be associated with one or more image processing algorithms.
  • the image processing node can classify data according to different numerical intervals.
  • the numerical intervals can be set according to the processing function or requirements of the image processing node, and the data intervals can be divided according to the data type corresponding to the analysis result of the image processing node. For example, for the brightness value, the higher the brightness value, the better the image quality; for the noise value, by contrast, the higher the noise value, the worse the image quality.
  • the value range can be set according to the actual processing requirements of the image processing node.
  • that the target parsing result of the image processing node belongs to the first numerical interval may mean that, if the value of the target parsing result is greater than the lower bound of the first numerical interval and/or less than the upper bound of the first numerical interval, it can be determined that the target parsing result belongs to the first numerical interval; otherwise, the target parsing result does not belong to the first numerical interval.
  • the lower bound and the upper bound of the first numerical interval may refer to two real numbers forming an interval range, and the lower bound real number is less than the upper bound real number.
  • the numerical interval of the algorithm selection strategy can be assumed to range from negative infinity to positive infinity. If the algorithm selection strategy sets an image processing algorithm, that image processing algorithm can be directly determined as the target processing algorithm of the image processing node. If the algorithm selection strategy does not set an image processing algorithm, the image processing node is skipped and the next image processing node is entered.
  • different algorithm selection strategies can be set for different image processing nodes.
  • the setting of different algorithm selection strategies can realize the rapid selection of the algorithm of the corresponding image processing node, avoid the problem of fixed algorithm selection mode due to a single algorithm selection strategy, and improve the efficiency and accuracy of algorithm selection.
  • by setting numerical intervals, the image processing node can be configured over multiple data ranges, thereby improving the accuracy and precision of the image processing node configuration.
  • the target processing algorithm of the target processing node is selected according to the algorithm selection strategy, which may include:
  • the image processing algorithm associated with the target numerical interval is used as the target processing algorithm of the target processing node.
  • the algorithm selection strategy may include at least one numerical interval, the numerical interval may be associated with the image processing algorithm or may not be associated with the image processing algorithm, and the numerical interval not associated with the image processing algorithm may be connected to the next image processing node. If the numerical interval where the target parsing result of the previous image processing node is located is not associated with the image processing algorithm, it may be determined that the image processing node does not meet the algorithm selection condition.
  • a second numerical interval in at least one numerical interval in the algorithm selection strategy can be determined.
  • the second numerical interval can be a numerical interval of an associated image processing algorithm. Therefore, by matching the numerical values of the second numerical interval and the target analysis result, an accurate target processing algorithm can be obtained, thereby improving the selection efficiency and accuracy of the target processing algorithm.
  • referring to FIG. 5, a flowchart of another embodiment of the image processing method provided in this embodiment is shown.
  • the difference from the above-mentioned embodiment is that the determination of the target processing node and the target processing algorithm of the target processing node can be obtained by the following steps:
  • the next image processing algorithms associated with different numerical intervals may be the same or different, and may be obtained according to the algorithm selection requirement.
  • 503 Determine whether the target parsing result of the image processing node belongs to the first numerical interval. If so, execute 504; otherwise, execute 505.
  • 508 Get the target processing node obtained when the algorithm decision tree traversal ends.
  • At least one numerical interval may also include other numerical intervals such as a third numerical interval.
  • the number of at least one numerical interval is not specifically limited.
  • the first numerical interval and the second numerical interval shown in this embodiment are used only to describe in detail the comparison between the target analysis result and the first and second numerical intervals in this solution, and should not be regarded as a limitation of the technical solution disclosed herein.
  • the tree structure of the algorithm decision tree can be used to divide the numerical interval of the image processing node starting from the first image processing node to obtain a first numerical interval directly associated with the next image processing node and a second numerical interval not directly associated with the next image processing node.
  • the selection of the target processing node and the target processing algorithm associated with the target processing node can be quickly completed, thereby improving the selection efficiency and accuracy.
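The tree walk described above can be sketched as follows. This is an illustrative model under assumed data structures (the class and field names are not from the disclosure): each node inspects one parsing metric, an interval carrying an algorithm is a "second" interval that marks the node as a target processing node, and an interval without an algorithm is a "first" interval that skips straight to the next node:

```python
from dataclasses import dataclass, field
from math import inf
from typing import Optional

@dataclass
class Interval:
    lower: float
    upper: float
    algorithm: Optional[str] = None          # None marks a "first" interval

    def contains(self, value):
        return self.lower < value <= self.upper

@dataclass
class Node:
    name: str
    metric: str                              # which parsing result this node inspects
    intervals: list = field(default_factory=list)
    next: Optional["Node"] = None

def traverse(root, parse_results):
    """Walk the tree top-down, collecting (node, algorithm) pairs for every
    node whose parsing result lands in an algorithm-bearing interval."""
    selected, node = [], root
    while node is not None:
        value = parse_results[node.metric]
        for iv in node.intervals:
            if iv.contains(value) and iv.algorithm is not None:
                selected.append((node.name, iv.algorithm))
                break
        node = node.next
    return selected

root = Node("resolution", "resolution", [
    Interval(0, 360, "4x super-resolution"),
    Interval(360, 1080, "2x super-resolution"),
    Interval(1080, inf),                     # first interval: no algorithm, skip ahead
])
root.next = Node("noise", "noise", [
    Interval(70, inf, "noise reduction"),
    Interval(-inf, 70),                      # first interval: skip
])
print(traverse(root, {"resolution": 256, "noise": 50}))
# [('resolution', '4x super-resolution')]
```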
  • the image processing nodes may be formed by several groups of processing nodes, which may generally include an image enhancement node, a content extraction node, and an image restoration node.
  • the image enhancement node may include at least one of a resolution enhancement node, a noise reduction node, a brightness enhancement node, and a color correction node.
  • the image processing node includes: an image enhancement node; the image enhancement node specifically includes: at least one of a resolution enhancement node, a noise reduction node, a brightness enhancement node, and a color correction node;
  • the target parsing results corresponding to the resolution enhancement node include: resolution value;
  • the target analysis results corresponding to the noise reduction node include: clarity and noise scores;
  • the target parsing results corresponding to the brightness enhancement node include: brightness value;
  • the target analysis results corresponding to the color correction node include: color score.
  • the image enhancement node may include at least one of a resolution enhancement node, a noise reduction node, a brightness enhancement node, and a color correction node, and may also include at least one of a portrait enhancement node, an image quality enhancement node, a portrait slimming node, a color enhancement node, and a noise reduction node.
  • the image enhancement node may be used to enhance the image quality and improve the subsequent image processing effect.
  • the image enhancement node including at least one of a resolution enhancement node, a noise reduction node, a brightness enhancement node, and a color correction node
  • the image can be repaired from multiple dimensions such as resolution, noise, brightness, color, etc., thereby improving the enhancement angle and accuracy of the image.
  • the image processing node further includes: a content extraction node; and the target parsing result corresponding to the content extraction node includes: the number of portraits.
  • the content extraction node may also include: at least one of a portrait region extraction node, an object region segmentation node, and a saliency segmentation node.
  • Different extraction functions can be set through one or more content extraction nodes, and corresponding processing algorithms can be implemented using different extraction functions.
  • the image is processed using the content extraction node to obtain the corresponding processing node, and the number of portraits is analyzed through the processing node, so that the image is analyzed in a targeted manner using the number of portraits, achieving detailed analysis of the image content.
  • the image processing node further includes: an image restoration node; the image restoration node includes: at least one of an image erasing algorithm, an image expansion algorithm, an image cropping algorithm, and a portrait slimming algorithm.
  • the image expansion algorithm may include a product restoration algorithm, a comic restoration algorithm, a landscape restoration algorithm, etc.
  • the corresponding processing algorithm may be selected according to the image processing type to implement the algorithm type selection, improve the correlation between the algorithm and the image processing type, and improve the algorithm processing efficiency.
  • any one of the image erasing algorithm, image expansion algorithm, image cropping algorithm, and portrait slimming algorithm is selected to achieve targeted repair of the image and improve the image repair efficiency and accuracy.
  • the image analysis model corresponding to at least one analysis dimension may include: image quality detection algorithm, brightness detection algorithm, color detection algorithm, contrast detection algorithm, aesthetic detection algorithm, noise detection algorithm, portrait recognition algorithm and other algorithms.
  • the above algorithms can detect analysis results of the image to be processed such as the resolution, clarity, color, brightness, contrast, aesthetic score, noise score, and number of portraits.
  • the above analysis results can be used for the selection of the target processing algorithm.
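A minimal sketch of running one analysis model per parsing dimension and gathering the results into a single structure for later algorithm selection (the detectors below are trivial stand-ins for the trained models listed above; all names and the toy image format are illustrative):

```python
# Stand-in "analysis models", one per parsing dimension.
analysis_models = {
    "resolution": lambda img: img["width"],
    "brightness": lambda img: sum(img["pixels"]) / len(img["pixels"]),
    "portraits":  lambda img: img.get("faces", 0),
}

def analyse(image):
    """Run every analysis model on the image and collect one result per dimension."""
    return {dim: model(image) for dim, model in analysis_models.items()}

image = {"width": 256, "pixels": [10, 20, 30], "faces": 2}
print(analyse(image))  # {'resolution': 256, 'brightness': 20.0, 'portraits': 2}
```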
  • the algorithm decision tree may include: an image enhancement node 601 , a content extraction node 602 , and an image restoration node 603 .
  • the image enhancement node 601 may include a resolution enhancement node 6011 , a clarity enhancement node 6012 , a noise reduction node 6013 , a brightness enhancement node 6014 , and a color enhancement node 6015 .
  • the first processing node in the image enhancement node 601 may be the resolution enhancement node 6011.
  • the resolution enhancement node 6011 may include three numerical intervals of resolution, namely (0, 360p), (360p, 1080p), and greater than 1080p.
  • the two numerical intervals (0, 360p) and (360p, 1080p) belong to the second numerical interval: (0, 360p) is associated with a 4x super-resolution algorithm, and (360p, 1080p) is associated with a 2x super-resolution algorithm.
  • the first numerical interval, greater than 1080p, is not associated with an image processing algorithm but is directly connected to the clarity enhancement node 6012. After the resolution of the image to be processed is increased by the resolution enhancement algorithm, the noise reduction node 6013 can be entered.
  • the noise reduction node 6013 may include two numerical intervals, namely one where the noise value is greater than 70 and one where the noise value is less than 70.
  • the second numerical interval where the noise value is greater than 70 is associated with the noise reduction algorithm, and the first numerical interval where the noise value is less than 70 is directly connected to the brightness enhancement node 6014.
  • the clarity enhancement node 6012 may include two numerical intervals, greater than 60 and less than 60. If the clarity score falls in the interval greater than 60, the node is directly connected to the brightness enhancement node 6014; if the clarity score falls in the interval less than 60, the image quality enhancement algorithm associated with that interval is used to enhance the image quality of the processed image.
  • the brightness enhancement node 6014 may include two value intervals, namely a first value interval greater than 30 and a second value interval less than 30.
  • the second value interval, less than 30, is associated with a brightness enhancement algorithm, and the first value interval, greater than 30, is directly connected to the content extraction node 602.
  • the second numerical interval of the noise reduction node 6013 is associated with a noise reduction algorithm, and the noise reduction algorithm can be connected to the color enhancement node 6015.
  • the color enhancement node 6015 may include three numerical intervals, namely, color values less than 25, greater than 25 and less than 55, and greater than 55.
  • a color value less than 25 is associated with the HDR color enhancement algorithm as a second numerical interval; a color value greater than 25 and less than 55 is associated with the color correction algorithm, also as a second numerical interval; and a color value greater than 55 is directly connected to the content extraction node 602 as the first numerical interval.
  • the parsing result corresponding to the content extraction node 602 may be the number of portraits.
  • the content extraction node 602 may include three numerical intervals: greater than 1 and less than or equal to 2, greater than 3, and 0. Among them, the interval greater than 1 and less than or equal to 2 can be associated with the skin resurfacing algorithm as a second numerical interval, the interval greater than 3 can be associated with the portrait enhancement algorithm as a second numerical interval, and the interval of 0 leads directly into the image restoration node 603 as the first numerical interval.
  • the image can be connected to the image restoration node 603 after the skin resurfacing algorithm or the portrait enhancement algorithm. Among them, the image restoration node 603 can be associated with the intelligent slimming algorithm according to the portrait beauty type.
  • the resolution enhancement node 6011 determines that the resolution 256 belongs to the second numerical interval (0, 360p), so the target processing algorithm corresponding to the resolution enhancement node 6011 is the 4x super-resolution algorithm. After that, the denoising node 6013 connected to the 4x super-resolution algorithm is entered; the noise score 50 belongs to the first numerical interval, less than 70, so no noise reduction is required. Afterwards, the brightness enhancement node 6014 is entered.
  • the brightness value is 20, which does not belong to the first numerical interval, but belongs to the second numerical interval less than 30.
  • the brightness enhancement node is the target processing node, and the brightness enhancement algorithm associated with less than 30 can be applied.
  • the number of portraits is 2, then it can be determined that the content extraction node is the target processing node, and the corresponding target processing algorithm is the skin resurfacing algorithm.
  • This image restoration node 603 is directly associated with a slimming algorithm. It can be determined that the image restoration node is the target processing node, and the corresponding target processing algorithm is the slimming algorithm.
  • the resolution enhancement node 6011, the brightness enhancement node 6014, the content extraction node 602, and the image restoration node 603 can be determined as target processing nodes, and the target processing algorithms associated with each target processing node can be obtained.
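The FIG. 6 walk-through above can be condensed into a flat sketch (thresholds are taken from the text; boundary cases the text leaves open, such as exactly 1 or 3 portraits, are left open here too, and the function name is illustrative):

```python
def select_algorithms(resolution, noise, brightness, portraits):
    chosen = []
    if resolution <= 360:
        chosen.append("4x super-resolution")
    elif resolution <= 1080:
        chosen.append("2x super-resolution")
    if noise > 70:                       # the first interval (< 70) skips this node
        chosen.append("noise reduction")
    if brightness < 30:                  # second interval (< 30)
        chosen.append("brightness enhancement")
    if 1 < portraits <= 2:
        chosen.append("skin resurfacing")
    elif portraits > 3:
        chosen.append("portrait enhancement")
    chosen.append("slimming")            # image restoration node 603 is directly associated
    return chosen

print(select_algorithms(256, 50, 20, 2))
# ['4x super-resolution', 'brightness enhancement', 'skin resurfacing', 'slimming']
```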
  • the image processing nodes can be processed in the order of their positions in the algorithm decision tree.
  • the target processing node includes at least one, and further includes:
  • image processing results including:
  • the target processing algorithm of each target processing node is sequentially executed to obtain the node processing result of each target processing node, and the node processing result output by the previous target processing node is used as the input of the next target processing node;
  • the node processing result corresponding to the last target processing node is obtained as the image processing result.
  • the position of the target processing node in the algorithm decision tree may refer to the number of layers of the node in the algorithm decision tree and the order of the node in the same layer.
  • the target processing algorithm of each target processing node is executed respectively, so that at least one target processing node is executed in sequence, thereby achieving the smooth execution of the target processing algorithm of each target processing node, effectively improving the execution efficiency and execution reliability of at least one target processing node, and improving the image processing efficiency.
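The sequential execution just described, where each node's output feeds the next node's input and the last output is the final image processing result, can be sketched as (stand-in callables on a toy image value; names are illustrative, not from the disclosure):

```python
def run_pipeline(image, target_algorithms):
    """Execute the target processing algorithms in tree order, feeding each
    node's output into the next node; the last output is the final result."""
    result = image
    for algorithm in target_algorithms:
        result = algorithm(result)
    return result

double = lambda img: img * 2        # stand-in for e.g. a super-resolution step
brighten = lambda img: img + 10     # stand-in for a brightness enhancement step
print(run_pipeline(5, [double, brighten]))  # 20
```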
  • a target parsing result is selected for each image processing node, and the target parsing result corresponding to each image processing node is obtained, including:
  • acquiring the processing requirement information of each image processing node and the parsing content information corresponding to the at least one parsing result;
  • the analysis result corresponding to the analysis content information having the highest similarity to the processing requirement information of each image processing node is used as the target analysis result of each image processing node.
  • the processing requirement information of the image processing node can be matched with the analysis content information of at least one analysis result to obtain the matching degree of the image processing node corresponding to the at least one analysis result, so as to take the analysis result with the highest matching degree as the target analysis result of the image processing node.
  • the information matching between the processing requirement information and the parsed content information may include performing word segmentation matching between the processing requirement information and the parsed content information to obtain the similarity between the processing requirement information and the parsed content information. That is, the similarity may refer to the number or proportion of words with the same or similar meanings.
  • when selecting parsing content information for an image processing node, information matching can be performed based on the processing requirement information of the image processing node and the parsing content information corresponding to the at least one parsing result.
  • the adaptation of each image processing node to different parsing dimensions can be achieved, and the target parsing result of each image processing node can be accurately obtained, thereby improving the accuracy of algorithm selection of each processing node.
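A minimal sketch of the similarity matching described above, using whitespace tokenisation and token overlap as a stand-in for the word-segmentation matching (in practice a proper segmenter would be used for Chinese text; all names are illustrative):

```python
def best_match(requirement, results):
    """Pick the parsing result whose content description shares the largest
    token overlap with the node's processing-requirement text."""
    req_tokens = set(requirement.lower().split())

    def similarity(description):
        tokens = set(description.lower().split())
        # Jaccard-style overlap: shared tokens over all tokens seen.
        return len(req_tokens & tokens) / max(len(req_tokens | tokens), 1)

    return max(results, key=lambda item: similarity(item[0]))

results = [("image noise score", 55), ("image brightness value", 20)]
print(best_match("brightness value of image", results))
# ('image brightness value', 20)
```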
  • the method further includes: scheduling the target processing algorithm from an image processing algorithm library to an algorithm container;
  • the corresponding image improvement processing is performed on the image to be processed to obtain the improved target image, including:
  • corresponding image improvement processing is performed on the image to be processed based on the target processing algorithm to obtain an improved target image.
  • the algorithm container may refer to an independent algorithm running space established for the image to be processed.
  • the algorithm container may implement independent running of the algorithm.
  • the algorithm container may be a CUDA (Compute Unified Device Architecture) context created for the image to be processed.
  • the step of dispatching the target processing algorithm from the image processing algorithm library to the algorithm container may include: dispatching the target processing algorithm from the image processing algorithm library to the algorithm container through an algorithm dispatcher.
  • the image processing algorithm library can store the image processing algorithms obtained through training. Scheduling the target processing algorithm to the algorithm container can refer to scheduling the parameters of the target processing algorithm from the image processing algorithm library to the algorithm container, so as to realize the independent operation of the target processing algorithm by the algorithm container, so that the image processing process of the image to be processed is independent of other algorithm containers, thereby improving the efficiency and security of the algorithm operation.
  • the target processing algorithm after determining the target processing algorithm, can be scheduled to the algorithm container.
  • the execution process of the target processing algorithm can be made independent of other containers, thereby improving the execution efficiency and security of the algorithm.
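The container dispatch can be modelled loosely as below. In the disclosure the container may be a CUDA context; here it is reduced to a plain object that copies algorithm parameters in, so each image's processing runs in isolation from other containers (all names and the parameter layout are assumptions for illustration only):

```python
class AlgorithmContainer:
    """One independent running space per image to be processed."""

    def __init__(self):
        self.algorithms = {}

    def load(self, name, params):
        # Copy the parameters in, so execution is isolated from other containers.
        self.algorithms[name] = dict(params)

    def run(self, name, image):
        params = self.algorithms[name]
        return params["fn"](image, **params.get("kwargs", {}))

# Stand-in "algorithm library" entry: a callable plus its tuned parameters.
library = {"brightness": {"fn": lambda img, delta=10: img + delta,
                          "kwargs": {"delta": 15}}}

container = AlgorithmContainer()          # dispatcher creates one per image
container.load("brightness", library["brightness"])
print(container.run("brightness", 100))   # 115
```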
  • the image processing device 700 may include the following units:
  • the image acquisition unit 701 is used to obtain an image to be processed in response to an image processing request.
  • the image parsing unit 702 is used to parse the image to be processed from at least one parsing dimension to obtain at least one parsing result of the image to be processed.
  • the algorithm selection unit 703 is used to select a target processing algorithm for the image to be processed according to at least one analysis result.
  • Image processing unit 704 used to perform corresponding image processing on the image to be processed based on the target processing algorithm to obtain an image processing result.
  • the image parsing unit includes:
  • a model acquisition module is used to acquire an image analysis model that matches each analysis dimension
  • the image analysis module is used to input the image to be processed into the image analysis model of each analysis dimension for analysis calculation, obtain the analysis results corresponding to the image to be processed in at least one analysis dimension, and obtain at least one analysis result of the image to be processed.
  • the algorithm selection unit includes:
  • a first determination module is used to determine at least one image processing node in an algorithm decision tree, where the algorithm decision tree is connected by at least one image processing node according to an algorithm selection strategy of each node;
  • a result selection module used to select a target parsing result for each image processing node from at least one parsing result, and obtain the target parsing result corresponding to each image processing node;
  • a result acquisition module is used to determine the target processing node that meets the algorithm execution conditions in the algorithm decision tree by using the target parsing results of each image processing node;
  • the algorithm selection module is used to select a target processing algorithm of a target processing node according to an algorithm selection strategy to obtain a target processing algorithm for an image to be processed.
  • the result acquisition module includes:
  • the condition judgment submodule is used to judge whether the target parsing result of the image processing node meets the algorithm execution condition in a top-down order, starting from the first image processing node of the algorithm decision tree;
  • the first processing submodule is used to determine the image processing node as a target processing node if it is determined that the target parsing result of the image processing node meets the algorithm execution condition, and after selecting the target processing algorithm of the target processing node according to the algorithm selection strategy, enter the next image processing node associated with the target processing algorithm, and continue to determine whether the target parsing result of the image processing node meets the algorithm execution condition;
  • the second processing submodule is used to enter the next image processing node associated with the image processing node and continue to determine whether the target parsing result of the image processing node meets the algorithm execution condition if it is determined that the target parsing result of the image processing node does not meet the algorithm execution condition;
  • the target acquisition submodule is used to obtain the target processing node obtained at the end of the algorithm decision tree traversal.
  • the algorithm selection strategy includes at least one numerical interval of the image processing node, each numerical interval is associated with an image processing algorithm or a next image processing node, and the image processing algorithm of each numerical interval is associated with the next image processing node;
  • the result acquisition module includes:
  • a first determination submodule used to determine at least one numerical interval corresponding to the algorithm selection strategy of each image processing node
  • An interval acquisition submodule adapted to determine a first numerical interval directly associated with a next image processing node from at least one numerical interval corresponding to each image processing node;
  • the node traversal submodule is used to traverse the algorithm decision tree; if the target parsing result of any image processing node does not belong to the first numerical interval of the image processing node, the image processing node is determined to be a target processing node that meets the algorithm execution conditions, and the target processing nodes collected during the traversal of the algorithm decision tree are obtained.
  • the algorithm selection module includes:
  • a second determination submodule used to determine a second numerical interval other than the first numerical interval in at least one numerical interval corresponding to the algorithm selection strategy
  • the third determination submodule is used to determine, from the second numerical interval, the target numerical interval to which the target parsing result of the target processing node belongs;
  • the algorithm association submodule is used to use the image processing algorithm associated with the target numerical interval as the target processing algorithm of the target processing node.
  • the image processing node includes: an image enhancement node; the image enhancement node specifically includes: at least one of a resolution enhancement node, a noise reduction node, a brightness enhancement node, and a color correction node;
  • the target parsing results corresponding to the resolution enhancement node include: resolution value;
  • the target analysis results corresponding to the noise reduction node include: clarity and noise scores;
  • the target parsing results corresponding to the brightness enhancement node include: brightness value;
  • the target analysis results corresponding to the color correction node include: color score.
  • the image processing node further includes: a content extraction node; and the target parsing result corresponding to the content extraction node includes: the number of portraits.
  • the image processing node further includes: an image restoration node; the image restoration node includes: at least one of an image erasing algorithm, an image expansion algorithm, an image cropping algorithm, and a portrait slimming algorithm.
  • the target processing node includes at least one, and further includes:
  • a sequence determination unit used to determine the processing sequence corresponding to at least one target processing node based on the position of at least one target processing node in the algorithm decision tree;
  • Image processing unit comprising:
  • a sequential execution module used to sequentially execute the target processing algorithms of each target processing node according to the processing sequence corresponding to at least one target processing node, obtain the node processing results of each target processing node, and use the node processing result output by the previous target processing node as the input of the next target processing node;
  • the result acquisition module is used to obtain the node processing result corresponding to the last target processing node as the image processing result.
  • the result selection module includes:
  • a content acquisition submodule used for analyzing content information corresponding to the processing requirement information of each image processing node and at least one analysis result
  • the information matching submodule is used to take, according to the analysis content information corresponding to the at least one analysis result, the analysis result whose analysis content information has the highest similarity to the processing requirement information of each image processing node as the target analysis result of each image processing node.
  • it also includes:
  • An algorithm scheduling unit used for scheduling a target processing algorithm from an image processing algorithm library to an algorithm container
  • Image processing unit including:
  • the container execution module is used to perform corresponding image improvement processing on the image to be processed based on the target processing algorithm in the algorithm container to obtain an improved target image.
  • the device provided in this embodiment can be used to execute the technical solution of the above method embodiment. Its implementation principle and technical effect are similar, and this embodiment will not be repeated here.
  • the embodiment of the present disclosure also provides an electronic device.
  • referring to FIG. 8, a schematic structural diagram of an electronic device 800 suitable for implementing an embodiment of the present disclosure is shown.
  • the electronic device 800 may be a terminal device or a server.
  • the terminal device may include but is not limited to mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (Portable Android Devices, PADs), portable multimedia players (PMPs), vehicle terminals (such as vehicle navigation terminals), etc., and fixed terminals such as digital TVs, desktop computers, etc.
  • the electronic device shown in FIG8 is only an example and should not bring any limitation to the functions and scope of use of the embodiment of the present disclosure.
  • the electronic device 800 may include a processing device (e.g., a central processing unit, a graphics processing unit, etc.) 801, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage device 808 to a random access memory (RAM) 803.
  • Various programs and data required for the operation of the electronic device 800 are also stored in the RAM 803.
  • the processing device 801, the ROM 802, and the RAM 803 are connected to each other via a bus 804.
  • An input/output (I/O) interface 805 is also connected to the bus 804.
  • the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 807 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 808 including, for example, a magnetic tape, a hard disk, etc.; and communication devices 809.
  • the communication device 809 may allow the electronic device 800 to communicate with other devices wirelessly or by wire to exchange information.
  • although FIG. 8 shows an electronic device 800 having various devices, it should be understood that it is not required to implement or possess all of the devices shown; more or fewer devices may alternatively be implemented or possessed.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program includes a program code for executing the method shown in the flowchart.
  • the computer program can be downloaded and installed from a network through a communication device 809, or installed from a storage device 808, or installed from a ROM 802.
  • when the computer program is executed by the processing device 801, the above-mentioned functions defined in the method of the embodiment of the present disclosure are executed.
  • the computer-readable medium disclosed above may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device or device, or any combination of the above.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium containing or storing a program that may be used by or in combination with an instruction execution system, device or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which a computer-readable program code is carried.
  • This propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above.
  • the computer readable signal medium may also be any computer readable medium other than a computer readable storage medium, which may send, propagate or transmit a program for use by or in conjunction with an instruction execution system, apparatus or device.
  • the program code contained on the computer readable medium may be transmitted using any suitable medium, including but not limited to: wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • the computer-readable medium may be included in the electronic device, or may exist independently without being incorporated into the electronic device.
  • the computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to execute the method shown in the above embodiments.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, C++, and conventional procedural programming languages such as "C" or similar programming languages.
  • the program code may be executed entirely on the user's computer, partially on the user's computer, as a separate software package, partially on the user's computer and partially on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (e.g., via the Internet using an Internet service provider).
  • each square box in the flow chart or block diagram can represent a module, a program segment or a part of a code, and the module, the program segment or a part of the code contains one or more executable instructions for realizing the specified logical function.
  • the functions marked in the square box can also occur in a sequence different from that marked in the accompanying drawings. For example, two square boxes represented in succession can actually be executed substantially in parallel, and they can sometimes be executed in the opposite order, depending on the functions involved.
  • each square box in the block diagram and/or flow chart, and the combination of the square boxes in the block diagram and/or flow chart can be implemented with a dedicated hardware-based system that performs a specified function or operation, or can be implemented with a combination of dedicated hardware and computer instructions.
  • the nodes involved in the embodiments described in the present disclosure may be implemented by software or hardware.
  • the name of the node does not limit the node itself in some cases.
  • the first acquisition node may also be described as a "node that acquires at least two Internet Protocol addresses.”
  • exemplary types of hardware logic components include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and the like.
  • a machine-readable medium may be a tangible medium that may contain or store information for use by or in conjunction with an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • the machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices, or apparatuses, or any suitable combination of the foregoing.
  • machine-readable storage media may include electrical connections based on one or more wires, portable computer disks, hard disks, random access memories (RAM), read-only memories (ROM), erasable programmable read-only memories (EPROM or flash memory), optical fibers, portable compact disk read-only memories (CD-ROMs), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • an image processing method comprising:
  • parsing the image to be processed from at least one parsing dimension to obtain at least one parsing result of the image to be processed includes:
  • the image to be processed is respectively input into the image analysis model of each analysis dimension for analysis calculation, and the analysis results corresponding to the image to be processed in at least one analysis dimension are obtained, thereby obtaining at least one analysis result of the image to be processed.
  • selecting a target processing algorithm for an image to be processed according to at least one parsing result includes:
  • the target processing algorithm of the target processing node is selected according to the algorithm selection strategy, so as to obtain the target processing algorithm of the image to be processed.
  • determining the target processing node that meets the algorithm execution condition in the algorithm decision tree includes:
  • if it is determined that the target parsing result of the image processing node meets the algorithm execution condition, the image processing node is determined as the target processing node, and after the target processing algorithm of the target processing node is selected according to the algorithm selection strategy, the next image processing node associated with the target processing algorithm is entered to continue to determine whether the target parsing result of the image processing node meets the algorithm execution condition;
  • the algorithm selection strategy includes at least one numerical interval of an image processing node, each numerical interval is associated with an image processing algorithm or a next image processing node, and the image processing algorithm of each numerical interval is associated with the next image processing node;
  • the algorithm decision tree is traversed. If the target parsing result of any image processing node does not belong to the first numerical interval of the image processing node, the image processing node is determined to be a target processing node that meets the algorithm execution condition, and the target processing nodes obtained when the traversal of the algorithm decision tree ends are obtained.
  • selecting a target processing algorithm of a target processing node according to an algorithm selection strategy includes:
  • the image processing algorithm associated with the target numerical interval is used as the target processing algorithm of the target processing node.
  • the image processing node includes: an image enhancement node; the image enhancement node specifically includes: at least one of a resolution enhancement node, a noise reduction node, a brightness enhancement node, and a color correction node;
  • the target parsing results corresponding to the resolution enhancement node include: resolution value;
  • the target analysis results corresponding to the noise reduction node include: clarity and noise scores;
  • the target parsing results corresponding to the brightness enhancement node include: brightness value;
  • the target analysis results corresponding to the color correction node include: color score.
  • the image processing node further includes: a content extraction node; and the target parsing result corresponding to the content extraction node includes: the number of portraits.
  • the image processing node further includes: an image restoration node; the image restoration node includes: at least one of an image erasing algorithm, an image expansion algorithm, an image cropping algorithm, and a portrait slimming algorithm.
  • the target processing node includes at least one target processing node, and the method further includes: determining, based on the position of the at least one target processing node in the algorithm decision tree, a processing order corresponding to each of the at least one target processing node;
  • performing corresponding image processing on the image to be processed based on the target processing algorithm to obtain an image processing result includes:
  • the target processing algorithm of each target processing node is sequentially executed to obtain the node processing result of each target processing node, and the node processing result output by the previous target processing node is used as the input of the next target processing node;
  • the node processing result corresponding to the last target processing node is obtained as the image processing result.
  • selecting a target parsing result for each image processing node from at least one parsing result to obtain the target parsing result corresponding to each image processing node includes:
  • obtaining the processing requirement information of each image processing node and the parsing content information respectively corresponding to the at least one parsing result;
  • the analysis result corresponding to the analysis content information having the highest similarity to the processing requirement information of each image processing node is used as the target analysis result of each image processing node.
  • after selecting the target processing algorithm for the image to be processed, the method further includes: scheduling the target processing algorithm from an image processing algorithm library into an algorithm container;
  • performing the corresponding image improvement processing on the image to be processed to obtain the improved target image includes:
  • corresponding image improvement processing is performed on the image to be processed based on the target processing algorithm to obtain an improved target image.
  • an image processing apparatus comprising:
  • An image acquisition unit configured to obtain an image to be processed in response to an image processing request
  • An image analysis unit used to analyze the image to be processed from at least one analysis dimension to obtain at least one analysis result of the image to be processed;
  • an algorithm selection unit configured to select a target processing algorithm for the image to be processed according to at least one analysis result
  • the image processing unit is used to perform corresponding image processing on the image to be processed based on the target processing algorithm to obtain an image processing result.
  • an electronic device comprising: at least one processor and a memory;
  • the memory stores computer-executable instructions;
  • the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to execute the image processing method of the first aspect and various possible designs of the first aspect as described above.
  • a computer-readable storage medium in which computer-executable instructions are stored.
  • when a processor executes the computer-executable instructions, the image processing method as described in the first aspect and various possible designs of the first aspect is implemented.
  • a computer program product including a computer program, which, when executed by a processor, implements the image processing method of the first aspect and various possible designs of the first aspect.


Abstract

Embodiments of the present disclosure provide an image processing method, apparatus, device, medium, and product. The method includes: obtaining an image to be processed in response to an image processing request; parsing the image to be processed from at least one parsing dimension to obtain at least one parsing result of the image to be processed; selecting a target processing algorithm for the image to be processed according to the at least one parsing result; and performing corresponding image improvement processing on the image to be processed based on the target processing algorithm to obtain an improved target image.

Description

Image processing method, apparatus, device, medium, and product

This application claims priority to the Chinese invention patent application No. CN202211367028.X, filed on November 2, 2022 and entitled "Image processing method, apparatus, device, medium, and product", the entire contents of which are incorporated herein by reference.

Technical Field

Embodiments of the present disclosure relate to the field of computer technology, and in particular, to an image processing method, apparatus, device, medium, and product.

Background

At present, in the field of computer vision, an automatic image processing link can be set up to process and apply images. For example, a user can upload an image, the uploaded image is processed by the image processing link to obtain a final image processing result, and the image processing result is displayed to the user. For example, a user can upload a landscape image, and the landscape image passes through an image processing link of face region recognition, cropping of the face region image, and identity recognition on the face region image to obtain a final identity recognition result.

As can be seen from the above description, an image processing link may include multiple image processing algorithms to achieve the final processing of an image. However, when all images use the same image processing link, the image processing accuracy is not high.

Summary

Embodiments of the present disclosure provide an image processing method, apparatus, device, medium, and product, to overcome the problem that the image processing accuracy is not high when all images use the same image processing link.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including:

obtaining an image to be processed in response to an image processing request;

parsing the image to be processed from at least one parsing dimension to obtain at least one parsing result of the image to be processed;

selecting a target processing algorithm for the image to be processed according to the at least one parsing result;

performing corresponding image improvement processing on the image to be processed based on the target processing algorithm to obtain an improved target image.

In a second aspect, an embodiment of the present disclosure provides an image processing apparatus, including:

an image acquisition unit, configured to obtain an image to be processed in response to an image processing request;

an image parsing unit, configured to parse the image to be processed from at least one parsing dimension to obtain at least one parsing result of the image to be processed;

an algorithm selection unit, configured to select a target processing algorithm for the image to be processed according to the at least one parsing result;

an image processing unit, configured to perform corresponding image processing on the image to be processed based on the target processing algorithm to obtain an image processing result.

In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor and a memory;

the memory stores computer-executable instructions;

the processor executes the computer-executable instructions stored in the memory, causing the at least one processor to execute the image processing method described in the first aspect and various possible designs of the first aspect.

In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when a processor executes the computer-executable instructions, the image processing method described in the first aspect and various possible designs of the first aspect is implemented.

In a fifth aspect, an embodiment of the present disclosure provides a computer program product, including a computer program, where the computer program, when executed by a processor, implements the image processing method described in the first aspect and various possible designs of the first aspect.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Apparently, the drawings described below are some embodiments of the present disclosure, and a person of ordinary skill in the art can derive other drawings from them without creative effort.

FIG. 1 is a diagram of an application example of an image processing method according to an embodiment of the present disclosure;

FIG. 2 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure;

FIG. 3 is another flowchart of an image processing method according to an embodiment of the present disclosure;

FIG. 4 is a flowchart of yet another embodiment of an image processing method according to an embodiment of the present disclosure;

FIG. 5 is yet another schematic flowchart of an image processing method according to an embodiment of the present disclosure;

FIG. 6 is an example diagram of an algorithm decision tree according to an embodiment of the present disclosure;

FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure; and

FIG. 8 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description

To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are some rather than all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.

The technical solution of the present disclosure can be applied to real-time image application scenarios. By parsing an image to be processed, acquired in real time, from each parsing dimension and using the parsing results for the selection of the image's processing algorithms, processing algorithms are selected automatically for the image, improving the accuracy and precision of algorithm selection.

At present, images are applied in diverse ways, for example in vehicle tracking, image beautification, face recognition, and many other fields. Any kind of image processing may involve multiple image processing algorithms, for example a processing link formed by a series of algorithms such as image denoising, region recognition, and image enhancement. An image processing procedure may generally include: a user uploads an image, the uploaded image is processed by the image processing link to obtain a final image processing result, and the result is displayed to the user. For example, a user can upload a landscape image, which passes through an image processing link of face region recognition, cropping of the face region, and identity recognition on the face region image to obtain a final identity recognition result. An image processing link may include multiple image processing algorithms, the algorithms of each link are usually fixed, and the models are trained with images matching the designed background, so a link is not applicable to other scenarios. Therefore, when an arbitrary image is input into an image processing link whose background differs from that of the image, image processing errors may occur and the processing accuracy is not high.

To solve the above technical problem, the technical solution of the present disclosure analyzes the image to obtain analysis results in different dimensions, such as clarity, background, and brightness. Through the analysis results of each dimension, algorithms are selected adaptively for the image instead of processing the captured image with a fixed image processing link, so that the image can be applied more widely and processed more efficiently.

In the embodiments of the present disclosure, an acquired image to be processed can be parsed from at least one parsing dimension to obtain at least one parsing result. A target processing algorithm can be selected for the image to be processed through the at least one parsing result. The selection of the target processing algorithm depends on the parsing results of the image to be processed, realizing adaptive algorithm selection for the image, which can improve the processing efficiency and accuracy of the image. After the target processing algorithm is obtained, the corresponding image improvement processing can be performed on the image to be processed by using the target processing algorithm to obtain an improved target image. This image processing manner relies on the image itself for the selection of the target processing algorithm, improving the degree of matching between the selected target processing algorithm and the image to be processed; when the image is processed with the target processing algorithm, personalized processing of the image is realized, which can improve the processing efficiency and accuracy of the image to be processed.

The technical solution of the present disclosure and how it solves the above technical problem are described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present invention are described in detail below with reference to the accompanying drawings.
FIG. 1 is a diagram of an application network architecture of the image processing method of the present disclosure. The application network architecture according to the embodiments of the present disclosure may include an electronic device and a client connected to the electronic device through a local area network or a wide area network. The electronic device may be a server such as a personal computer, an ordinary server, a super personal computer, or a cloud server; the specific type of the electronic device is not unduly limited in the present disclosure. The client may be, for example, a terminal device such as a mobile phone, a tablet computer, a personal computer, a smart home appliance, or a wearable device; the specific type of the client is not unduly limited in the present disclosure.

As shown in FIG. 1, taking the electronic device as a cloud server 1, a client 2 may be one or more of devices such as a mobile phone 21 and a tablet computer 22. Any client 2 can initiate an image processing request to the cloud server 1 and provide an image to be processed to the server 1. In response to the image processing request, the cloud server 1 can obtain the image to be processed and select a processing algorithm for the image according to the dimension analysis results of the image to obtain a target processing algorithm, and the target processing algorithm can be used for improvement processing of the image to obtain an improved target image. By selecting algorithms through the analysis results of the image, adaptive selection for the image is realized, improving the efficiency and accuracy of selection.
Referring to FIG. 2, which is a flowchart of an embodiment of an image processing method according to an embodiment of the present disclosure, the image processing method may include the following steps:

201: Obtain an image to be processed in response to an image processing request.

Optionally, the image processing request may be initiated by a user. In practical applications, to improve image processing efficiency, an image processing page may be provided, and the image processing page may include an image upload control or a video upload control. When the electronic device detects that the user triggers the image upload control or the video upload control, it can determine that the user initiates an image processing request, and acquire the image or video uploaded by the user, so as to obtain the image to be processed from the uploaded image or video.

Exemplarily, the image to be processed may include: an image of a single person, an image of multiple persons, product images of different types, an image taken in a natural scene, and the like. Since images to be processed belong to many categories, and different categories differ in processing purpose and angle, directly inputting the image into a preset image processing model as in the related art may lead to low processing precision. Therefore, this embodiment analyzes the image to be processed in an adaptive manner, so as to use the image analysis results to obtain a corresponding image processing algorithm for the image.

The image type of the image to be processed is unknown. The image to be processed may belong to any one of multiple image types. The image type may be any one of a quantity type set according to the number of persons in the image, an image type corresponding to the objects contained in the image, a classification of the image background, and the like, for example, a vehicle background, a building background, a natural scenery background, a solid-color background, and other types. In this embodiment, the type of the image is not unduly limited. The processing algorithms of different image types may be different.

202: Parse the image to be processed from at least one parsing dimension to obtain at least one parsing result of the image to be processed.

To solve the problem of obtaining algorithms for images of different image types, the image to be processed can be parsed in different parsing dimensions.

A parsing dimension may refer to an angle from which image analysis is performed on the image to be processed; that is, the parsing dimension can define the content of the image analysis and the requirements for the analysis result.

A parsing result may be a result obtained by parsing the image to be processed in a certain parsing dimension. For example, when the parsing dimension is the brightness dimension, the parsing result may be the image brightness value obtained by parsing.

There may be one or more parsing dimensions, which can be set according to actual use requirements. For example, for a simple image processing scenario such as face recognition, the number of portraits and the brightness can be taken as the parsing dimensions, while for an image beautification scenario, brightness, resolution, noise, and other angles can be taken as the parsing dimensions.
203: Select a target processing algorithm for the image to be processed according to the at least one parsing result.

Optionally, the image to be processed can be parsed from at least one parsing dimension respectively to obtain the parsing result of each parsing dimension. The at least one parsing result may include the parsing results respectively corresponding to the at least one parsing dimension. To make effective use of the parsing results, the at least one parsing result may also be the valid parsing results among the parsing results respectively corresponding to the at least one parsing dimension, where a valid parsing result may be obtained by performing a validity judgment on the parsing result of the corresponding parsing dimension.

The target processing algorithm may be an image processing algorithm that performs image processing on the image to be processed. There may be at least one image processing algorithm.

The target processing algorithm can be obtained by performing result analysis and algorithm selection using the at least one parsing result. Through the at least one parsing result, the target processing algorithm required by the image to be processed can be obtained accurately, realizing efficient utilization of the parsing results of the image.

204: Perform corresponding image improvement processing on the image to be processed based on the target processing algorithm to obtain an improved target image.

There may be at least one target processing algorithm, and the target processing algorithms as a whole can be used to perform the corresponding image improvement processing on the image to be processed to obtain the improved target image.

The target image may be the output data obtained by the last processing algorithm among the target processing algorithms processing its input data. The image to be processed can be input as input data into the first processing algorithm among the target processing algorithms. The processing algorithms are executed in sequence, and the output data of the previous processing algorithm can be used as the input data of the next processing algorithm.

In the embodiments of the present disclosure, the image to be processed is parsed from at least one parsing dimension to obtain at least one parsing result. Through the at least one parsing result, a target processing algorithm can be selected for the image to be processed. The selection of the target processing algorithm depends on the parsing results of the image, realizing adaptive algorithm selection, which can improve the processing efficiency and accuracy of the image. After the target processing algorithm is obtained, the corresponding image improvement processing can be performed on the image with the target processing algorithm to obtain the improved target image.
In practical applications, image parsing models can be set, and the image to be processed is parsed from different dimensions through various parsing models to obtain the parsing results of each dimension. As an embodiment, parsing the image to be processed from at least one parsing dimension to obtain at least one parsing result of the image to be processed may include:

obtaining an image parsing model matching each parsing dimension;

inputting the image to be processed into the image parsing model of each parsing dimension respectively for parsing calculation, and obtaining the parsing results respectively corresponding to the image to be processed in the at least one parsing dimension, so as to obtain the at least one parsing result of the image to be processed.

Optionally, the image parsing models may include at least one of: an image quality parsing model, an image meta-information parsing model, an object recognition model, a scene judgment model, and a copyright identification model.

The image quality parsing model can be used to detect quality-related parameters of the image, and may include at least one of: a VQScore (Video Quality Score) model, a noise detection model, a brightness detection model, a color detection model, a contrast detection model, and an aesthetics detection model.

Image meta-information may refer to attribute information involved in the generation process of the image. The image meta-information parsing model can be used to parse certain meta-information of the image, and may include at least one of: a resolution detection model, a format acquisition model, an encoding quality parsing model, a transparency detection model, and a theme color analysis model.

The object recognition model may be a model that recognizes the categories of objects in the image, and may include at least one of: a portrait recognition model, a vehicle recognition model, a product recognition model, and a license plate recognition model.

The scene judgment model can be used to detect the acquisition scene of the image, and may include: a scenery detection model, a composite image detection model, and a comic image detection model.

The copyright identification model can be used to detect copyright information of the image, and the copyright information is used for subsequent algorithm selection. For example, if a watermark is detected in the image, a watermark removal model can be used as the image processing algorithm. The copyright identification model may include at least one of: a text recognition model, a logo detection model, a PS-trace recognition model, and a watermark detection model.

In the embodiments of the present disclosure, by setting a matching image parsing model for each parsing dimension, the image to be processed can be input into the image parsing model of each parsing dimension for parsing calculation, so that the image is parsed in the at least one parsing dimension. By setting an image parsing model for each parsing dimension, the image to be processed can be parsed from each parsing dimension, improving the parsing efficiency and accuracy of the image to be processed.

In the technical solution provided by this embodiment, for an acquired image to be processed, the image can be parsed from at least one parsing dimension to obtain at least one parsing result, through which a target processing algorithm can be selected for the image. The selection of the target processing algorithm depends on the parsing results of the image, realizing adaptive algorithm selection, which can improve the processing efficiency and accuracy of the image. After the target processing algorithm is obtained, the corresponding image improvement processing can be performed on the image to obtain the improved target image. This image processing manner relies on the image itself for the selection of the target processing algorithm, improving the degree of matching between the selected target processing algorithm and the image to be processed; when processing the image with the target processing algorithm, personalized processing is realized, which can improve the processing efficiency and accuracy of the image to be processed.
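The per-dimension parsing described above can be sketched as a mapping from parsing dimensions to parsing models that each return one result. This is a minimal illustrative sketch, not the disclosure's actual models: the two model functions, their names, and their outputs are hypothetical stand-ins (an image is represented as a flat list of pixel values).

```python
# Hypothetical sketch of multi-dimension image parsing: each parsing
# dimension is matched to a parsing "model" (here stubbed as plain
# functions). Model names and results are illustrative only.
def brightness_model(image):
    # stub: mean pixel value stands in for the detected brightness value
    return sum(image) / len(image)

def noise_model(image):
    # stub: fixed noise score for illustration
    return 50.0

PARSING_MODELS = {
    "brightness": brightness_model,
    "noise_score": noise_model,
}

def parse_image(image):
    """Run the image through the model of every parsing dimension and
    collect one parsing result per dimension."""
    return {dim: model(image) for dim, model in PARSING_MODELS.items()}
```

Each entry of the returned mapping corresponds to one parsing result of one parsing dimension, which is the shape the later algorithm-selection steps consume.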
As shown in FIG. 3, which is a flowchart of yet another embodiment of an image processing method according to an embodiment of the present disclosure, the difference from the foregoing embodiments lies in that selecting a target processing algorithm for the image to be processed according to the at least one parsing result may include:

301: Determine at least one image processing node in an algorithm decision tree, where the algorithm decision tree is established by connecting the at least one image processing node according to the algorithm selection strategy of each node.

Optionally, the algorithm decision tree provided in this embodiment may include a data structure that makes decisions using a "tree" structure. An image processing node may be a node in the algorithm decision tree, and the algorithm decision tree can be used for the determination of image algorithms, so as to determine the algorithm associated with each image processing node.

The algorithm decision tree may include at least one image processing node, and the at least one image processing node can be connected according to the algorithm selection strategy of each node to form the algorithm decision tree. Each image processing node can be associated with an algorithm selection strategy, and the algorithm selection strategy of each image processing node can be used for the algorithm selection of the corresponding node.

The algorithm selection strategy may include the algorithm selection strategy set for each image processing node, and by executing the algorithm selection strategy, the image processing algorithm of the corresponding image processing node can be obtained.

302: Select a target parsing result for each image processing node from the at least one parsing result, to obtain the target parsing result corresponding to each image processing node.

There may be multiple target parsing results, each corresponding to parsing content information, and the parsing content information may include text describing the parsed content. A parsing result may include, for example, the brightness value detected by a brightness detection model. The image processing nodes may include an image enhancement node, and the image enhancement node may include a brightness enhancement node.

The target parsing result corresponding to each image processing node may include the target parsing result selected according to the processing function or processing requirement of the image processing node.

303: Using the target parsing result of each image processing node, determine a target processing node in the algorithm decision tree that meets the algorithm execution condition.

The algorithm decision tree may include at least one image processing node, and the target processing node meeting the algorithm execution condition in the decision tree can be determined. The algorithm execution condition may refer to the judgment condition for an image processing node to execute its associated algorithm. If it is determined that an image processing node meets the algorithm execution condition, the node is determined to be a target processing node; if it is determined that an image processing node does not meet the algorithm execution condition, the node is determined not to be a target processing node.

The algorithm execution conditions of different image processing nodes may be different and can be determined according to the processing functions of the nodes. Taking the brightness enhancement node as an example, the algorithm execution condition can be set as a brightness value less than 30; taking the color correction node as an example, the algorithm execution condition can be set as a color value less than 55. Therefore, for any image processing node, the corresponding algorithm execution condition can be determined according to the processing function or requirement of the node.

304: Select the target processing algorithm of the target processing node according to the algorithm selection strategy, to obtain the target processing algorithm of the image to be processed.

The algorithm selection strategy may include the algorithm execution condition as part of the strategy; besides the algorithm execution condition, the algorithm selection strategy may further include the specific selection strategy for the algorithm. The algorithm selection strategy may include two steps: the first step may be judging, according to the algorithm execution condition, whether an image processing node meets the algorithm execution condition; the second step is, when the image processing node is a target processing node meeting the algorithm execution condition, selecting among the algorithms associated with the node to obtain the target processing algorithm of the node.

In this embodiment, when performing algorithm selection for the algorithm decision tree, algorithms can be selected according to the at least one image processing node and the algorithm selection strategy managed by each image processing node. By setting image processing algorithms for each image processing node, targeted algorithm selection can be performed for each node, making the algorithm selection of each node more targeted and more accurate.
For ease of understanding, FIG. 4 is a flowchart of yet another embodiment of an image processing method provided by this embodiment. The difference from the foregoing embodiments lies in that determining, by using the target parsing result of each image processing node, a target processing node in the algorithm decision tree that meets the algorithm execution condition may include:

401: In top-down order, starting from the first image processing node of the algorithm decision tree, judge whether the target parsing result of the image processing node meets the algorithm execution condition; if yes, execute 402; if no, execute 405.

Exemplarily, in one embodiment, if it is determined that the target parsing result of the image processing node meets the algorithm execution condition, the image processing node is determined to be a target processing node, and after the target processing algorithm of the target processing node is selected according to the algorithm selection strategy, the next image processing node associated with the target processing algorithm is entered, and the judgment of whether the target parsing result of the image processing node meets the algorithm execution condition continues to be executed. In yet another embodiment, if it is determined that the target parsing result of the image processing node does not meet the algorithm execution condition, the next image processing node associated with the image processing node is entered, and the judgment of whether the target parsing result of the image processing node meets the algorithm execution condition continues to be executed.

Optionally, the algorithm execution condition can be judged through the algorithm selection strategy. The algorithm selection strategy includes at least one numerical interval, and the at least one numerical interval includes a first numerical interval and a second numerical interval. That the target parsing result of an image processing node meets the algorithm execution condition may include that the target parsing result does not belong to the first numerical interval among the at least one numerical interval, or that the target parsing result belongs to the second numerical interval among the at least one numerical interval.

402: Determine the image processing node to be a target processing node.

403: Select the target processing algorithm of the target processing node according to the algorithm selection strategy.

404: Enter the next image processing node associated with the target processing algorithm, and return to the judgment step in step 401 until the traversal of the algorithm decision tree ends.

405: Enter the next image processing node associated with the image processing node, and return to the judgment step in step 401 until the traversal of the algorithm decision tree ends.

406: Obtain the target processing nodes obtained when the traversal of the algorithm decision tree ends.

Optionally, the algorithm decision tree may include multiple layers, each layer may include one or more nodes, and connection relationships may exist between nodes of different layers. A node may refer to an image processing node.

The top-down order may mean starting from the first layer of the algorithm decision tree and determining from top to bottom, in sequence, whether each traversed image processing node meets the algorithm execution condition. In practical applications, not every image processing node is traversed during the traversal of the algorithm decision tree; image processing nodes can be selected according to the algorithm selection strategy, so as to realize the overall decision of the algorithm decision tree.

The algorithm decision tree may be a tree structure formed by the at least one image processing node according to the connection relationships corresponding to the algorithm selection strategies between the image processing nodes.

In this embodiment, when selecting the target processing node, the judgment of whether the target parsing result of an image processing node meets the algorithm execution condition can be performed in top-down order starting from the first image processing node of the algorithm decision tree. Through the judgment of the algorithm execution condition, a target processing algorithm meeting the algorithm execution condition can be obtained, realizing efficient and accurate selection of the target processing algorithm. On the basis of the condition judgment, the next image processing node can be entered, so that the algorithm decision tree is executed smoothly until the traversal ends. Through the sequential condition judgment of each image processing node in the algorithm decision tree, accurate selection of target processing nodes is realized.
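The top-down traversal described above can be sketched as follows. This is a hedged, minimal sketch under assumed data structures: each node holds a "first" interval (no algorithm needed, go to the next node) and a list of "second" intervals (select the associated algorithm, then follow that algorithm's edge). Node names, intervals, and algorithm names in the test usage are hypothetical.

```python
# Minimal sketch (assumed structure) of the top-down decision-tree walk:
# a node is skipped when its value falls in the first interval, and is a
# target processing node when its value falls in a second interval.
class Node:
    def __init__(self, name, first_interval, algo_intervals):
        self.name = name
        self.first_interval = first_interval  # (low, high): no algorithm needed
        # list of ((low, high), algorithm_name, next_node) entries
        self.algo_intervals = algo_intervals
        self.next_node = None                 # followed when first_interval matches

def in_interval(value, interval):
    low, high = interval
    return low <= value < high

def traverse(root, results):
    """Walk the tree from the root; return (node_name, algorithm) pairs
    for every target processing node met along the way."""
    selected, node = [], root
    while node is not None:
        value = results[node.name]
        if in_interval(value, node.first_interval):
            node = node.next_node             # condition not met: skip this node
            continue
        for interval, algo, nxt in node.algo_intervals:
            if in_interval(value, interval):
                selected.append((node.name, algo))
                node = nxt                    # follow the chosen algorithm's edge
                break
        else:
            node = node.next_node
    return selected
```

For example, a two-node chain (noise, then brightness) yields only the brightness enhancement algorithm for a low-noise, dark image, mirroring steps 401 to 406.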
As a possible implementation, the algorithm selection strategy includes at least one numerical interval of the image processing node, each numerical interval is associated with an image processing algorithm or the next image processing node, and the image processing algorithm of each numerical interval is connected to the next image processing node;

determining, by using the target parsing result of each image processing node, a target processing node in the algorithm decision tree that meets the algorithm execution condition includes:

determining the at least one numerical interval corresponding to the algorithm selection strategy of each image processing node;

determining, from the at least one numerical interval corresponding to each image processing node, a first numerical interval directly associated with the next image processing node;

traversing the algorithm decision tree, and if the target parsing result of any image processing node does not belong to the first numerical interval of the image processing node, determining the image processing node to be a target processing node meeting the algorithm execution condition, to obtain the target processing nodes aggregated by the algorithm decision tree.

An image processing node may be associated with one or more image processing algorithms, and the node can classify data according to different numerical intervals. The numerical intervals can be set according to the processing function or requirement of the image processing node, and the intervals can be divided according to the data type corresponding to the parsing result of the node. For example, taking the brightness value as an example, the higher the brightness value, the better the image quality, while taking the noise value as an example, the higher the noise value, the worse the image quality. The numerical intervals can be set according to the actual processing requirements of the image processing node.

Optionally, that the target parsing result of an image processing node belongs to the first numerical interval may include: if the numerical value of the target parsing result is greater than the lower bound of the first numerical interval and/or less than the upper bound of the first numerical interval, it can be determined that the target parsing result belongs to the first numerical interval; otherwise, the target parsing result does not belong to the first numerical interval. The lower bound and upper bound of the first numerical interval may refer to the two real numbers forming the interval range, where the lower-bound real number is less than the upper-bound real number.

Optionally, if no numerical interval is set in the algorithm selection strategy, the numerical interval of the algorithm selection strategy can default to the range from negative infinity to positive infinity. If the algorithm selection strategy sets an image processing algorithm, the image processing algorithm can be directly determined to be the target processing algorithm of the image processing node. If the algorithm selection strategy does not set an image processing algorithm, the image processing node is skipped and the next image processing node is entered.

In the embodiments of the present disclosure, different algorithm selection strategies can be set for different image processing nodes. Setting different algorithm selection strategies enables quick selection of the algorithm of the corresponding image processing node, avoids the problem of a fixed algorithm selection mode caused by a single selection strategy, and improves the efficiency and accuracy of algorithm selection. Meanwhile, through the setting of numerical intervals, the image processing node can be configured over multiple data stages, improving the accuracy and precision of the node configuration.
In a specific implementation process, selecting the target processing algorithm of the target processing node according to the algorithm selection strategy may include:

determining a second numerical interval, other than the first numerical interval, among the at least one numerical interval corresponding to the algorithm selection strategy;

determining, based on the target parsing result of the target processing node, a target numerical interval to which the target parsing result belongs from the second numerical interval;

taking the image processing algorithm associated with the target numerical interval as the target processing algorithm of the target processing node.

Optionally, the algorithm selection strategy may include at least one numerical interval; a numerical interval may or may not be associated with an image processing algorithm, and a numerical interval not associated with an image processing algorithm can connect to the next image processing node. If the numerical interval in which the target parsing result of the previous image processing node falls is not associated with an image processing algorithm, it can be determined that the image processing node does not meet the algorithm selection condition.

In this embodiment, the second numerical interval among the at least one numerical interval of the algorithm selection strategy can be determined; the second numerical interval may be a numerical interval associated with an image processing algorithm. Therefore, by matching the value of the target parsing result against the second numerical interval, an accurate target processing algorithm can be obtained, improving the efficiency and accuracy of selecting the target processing algorithm.
For ease of understanding, FIG. 5 is a flowchart of yet another embodiment of an image processing method provided by this embodiment. The difference from the foregoing embodiments lies in that the target processing node and the target processing algorithm of the target processing node can be determined through the following steps:

501: In top-down order, starting from the first image processing node of the algorithm decision tree, obtain the at least one numerical interval corresponding to the algorithm selection strategy of each image processing node.

502: Determine the first numerical interval and the second numerical interval among the at least one numerical interval. The first numerical interval is associated with the next image processing node, the second numerical interval is associated with an image processing algorithm, and the image processing algorithm is associated with the next image processing node. The next image processing nodes associated with different numerical intervals may be the same or different, and can be set according to the algorithm selection requirements.

503: Judge whether the target parsing result of the image processing node belongs to the first numerical interval; if yes, execute 504; if no, execute 505.

504: Determine that the image processing node is not a target processing node, switch to the next image processing node associated with the first numerical interval of the image processing node, and return to execute 501.

505: Determine that the image processing node is a target processing node.

506: Based on the target parsing result of the target processing node, determine from the second numerical interval the target numerical interval in which the target parsing result falls.

507: Take the image processing algorithm associated with the target numerical interval as the target processing algorithm of the target processing node, switch to the next image processing node associated with the target processing algorithm, and return to execute 501.

508: Obtain the target processing nodes obtained when the traversal of the algorithm decision tree ends.

Of course, in practical applications, the at least one numerical interval may further include other numerical intervals such as a third numerical interval. This embodiment does not specifically limit the number of numerical intervals; the first and second numerical intervals shown in this embodiment are intended to explain in detail the comparison of the target parsing result against the first and second numerical intervals, and should not be regarded as a detailed limitation on the technical solution of the present disclosure.

In this embodiment, when selecting the target processing algorithm and the target processing node from the algorithm decision tree, the tree structure of the algorithm decision tree can be used. Starting from the first image processing node, the numerical intervals of the image processing nodes are divided to obtain a first numerical interval directly associated with the next image processing node and a second numerical interval not directly associated with the next image processing node. Through the joint application of the first and second numerical intervals, the selection of the target processing nodes and the target processing algorithms associated with them can be completed quickly, improving the selection efficiency and accuracy.
In practical applications, for more detailed image processing, the image processing nodes can be formed by several groups of processing nodes, which can generally include an image enhancement node, a content extraction node, and an image restoration node.

Exemplarily, the image enhancement node may consist of at least one of a resolution enhancement node, a noise reduction node, a brightness enhancement node, and a color correction node.

As an optional implementation, the image processing nodes include: an image enhancement node; the image enhancement node specifically includes at least one of: a resolution enhancement node, a noise reduction node, a brightness enhancement node, and a color correction node;

the target parsing result corresponding to the resolution enhancement node includes: a resolution value;

the target parsing result corresponding to the noise reduction node includes: a clarity score and a noise score;

the target parsing result corresponding to the brightness enhancement node includes: a brightness value;

the target parsing result corresponding to the color correction node includes: a color score.

Optionally, in addition to at least one of the resolution enhancement node, noise reduction node, brightness enhancement node, and color correction node, the image enhancement node may further include at least one node such as a portrait enhancement node, an image quality enhancement node, a portrait slimming node, a color enhancement node, and a noise reduction node. Through the image enhancement node, the image quality can be enhanced, improving the processing effect of subsequent images.

In this embodiment, by setting the multiple processing nodes of the image enhancement node to include at least one of the resolution enhancement node, noise reduction node, brightness enhancement node, and color correction node, the image can be repaired from multiple dimensions such as resolution, noise, brightness, and color, improving the enhancement coverage and accuracy of the image.

As yet another optional implementation, the image processing nodes further include: a content extraction node; the target parsing result corresponding to the content extraction node includes: the number of portraits.

Optionally, in addition to the content extraction node itself, the content extraction nodes may further include at least one of: a portrait region extraction node, an object region segmentation node, and a saliency segmentation node.

Different extraction functions can be set through one or more content extraction nodes, and the corresponding processing algorithms are realized using the different extraction functions.

In this embodiment, the image is processed with the content extraction node to obtain the corresponding processing node, and the analysis of the number of portraits is realized through the processing node, so that the image can be analyzed in a targeted manner using the number of portraits, realizing detailed analysis of the image content.

As yet another optional implementation, the image processing nodes further include: an image restoration node; the image restoration node includes at least one of: an image erasing algorithm, an image expansion algorithm, an image cropping algorithm, and a portrait slimming algorithm.

Optionally, the image expansion algorithms may include a product restoration algorithm, a comic restoration algorithm, a scenery restoration algorithm, and the like; the corresponding processing algorithm can be selected according to the image processing type, realizing type-based selection of algorithms, improving the correlation between the algorithm and the image processing type, and improving the algorithm processing efficiency.

In this embodiment, for the image restoration node, any one of the image erasing algorithm, the image expansion algorithm, the image cropping algorithm, and the portrait slimming algorithm is selected to achieve targeted restoration of the image, improving the efficiency and accuracy of image restoration.
For ease of understanding, referring to FIG. 6, which is yet another example diagram of an algorithm decision tree provided by an embodiment of the present disclosure, in this image processing method, the image parsing models respectively corresponding to the at least one parsing dimension may include algorithms such as an image quality detection algorithm, a brightness detection algorithm, a color detection algorithm, a contrast detection algorithm, an aesthetics detection algorithm, a noise detection algorithm, and a portrait recognition algorithm. Through the above algorithms, parsing results of the image to be processed such as the resolution, image clarity, colorfulness, brightness, contrast, aesthetics score, noise score, and number of portraits can be detected. The above parsing results can be used for the selection of the target processing algorithm.

The algorithm decision tree may include: an image enhancement node 601, a content extraction node 602, and an image restoration node 603.

Exemplarily, the image enhancement node 601 may include a resolution enhancement node 6011, a clarity enhancement node 6012, a noise reduction node 6013, a brightness enhancement node 6014, and a color enhancement node 6015.

Further, the first processing node in the image enhancement node 601 may be the resolution enhancement node 6011. The resolution enhancement node 6011 may include three numerical intervals of resolution: (0, 360p), (360p, 1080p), and greater than 1080p. The two intervals (0, 360p) and (360p, 1080p) belong to the second numerical interval: (0, 360p) is associated with a 4x super-resolution algorithm, and (360p, 1080p) is associated with a 2x super-resolution algorithm. The interval greater than 1080p is the first numerical interval, is not associated with an image processing algorithm, and directly connects to the clarity enhancement node 6012. After the resolution of the image to be processed is increased by the resolution enhancement algorithm, the noise reduction node 6013 can be entered.

The noise reduction node 6013 may include two noise intervals: a noise value greater than 70 and a noise value less than 70. The second numerical interval (noise greater than 70) is associated with a noise reduction algorithm, and the first numerical interval (noise less than 70) directly connects to the brightness enhancement node 6014.

The clarity enhancement node 6012 may include two numerical intervals, greater than 60 and less than 60. A clarity score greater than 60 is the first numerical interval and directly connects to the brightness enhancement node 6014; a clarity score less than 60 is the second numerical interval, and the image quality of the image to be processed is enhanced based on the image quality enhancement algorithm associated with the interval less than 60.

The brightness enhancement node 6014 may include two numerical intervals: the first numerical interval greater than 30 and the second numerical interval less than 30. The second interval (less than 30) is associated with a brightness enhancement algorithm, and the first interval (greater than 30) directly connects to the content extraction node 602.

The noise reduction algorithm associated with the second numerical interval of the noise reduction node 6013 can connect to the color enhancement node 6015. The color enhancement node 6015 may include three numerical intervals: a color value less than 25, greater than 25 and less than 55, and greater than 55. A color value less than 25 is associated with an HDR color enhancement algorithm and is a second numerical interval; a color value greater than 25 and less than 55 is associated with a color correction algorithm and is a second numerical interval; a color value greater than 55 directly connects to the content extraction node 602 and is the first numerical interval.

The parsing result corresponding to the content extraction node 602 may be the number of portraits. The content extraction node 602 may include three numerical intervals: greater than 1 and less than or equal to 2, greater than 3, and 0. The interval greater than 1 and less than or equal to 2 can be associated with a skin smoothing algorithm as a second numerical interval; more than three portraits can be associated with a portrait enhancement algorithm as a second numerical interval; when the number is 0, the image restoration node 603 is entered directly, as the first numerical interval. After either the skin smoothing algorithm or the portrait enhancement algorithm, the image connects to the image restoration node 603, and the image restoration node 603 can be associated with an intelligent slimming algorithm according to the portrait beautification type.

Referring to the algorithm decision tree shown in FIG. 6, assume an arbitrary image to be processed has a resolution of 256, a colorfulness of 30, a brightness of 20, a contrast of 30, an aesthetics score of 50, a noise score of 50, and 2 portraits. After the judgment of the resolution enhancement node 6011, its resolution of 256 belongs to the second numerical interval (0, 360p), and the target processing algorithm corresponding to the resolution enhancement node 6011 is the 4x super-resolution algorithm. Then, the noise reduction node 6013 connected to the 4x super-resolution algorithm is entered; the noise score of 50 belongs to the first numerical interval (less than 70), so no noise reduction is needed. Then, the brightness enhancement node 6014 is entered. The brightness value of 20 does not belong to the first numerical interval but to the second numerical interval (less than 30), so the brightness enhancement node is a target processing node, and the brightness enhancement algorithm associated with the interval less than 30 applies. Then, the content extraction node 602 connected to the brightness enhancement algorithm is entered; with 2 portraits, the content extraction node can be determined to be a target processing node, and the corresponding target processing algorithm is the skin smoothing algorithm. Then, the image restoration node 603 can be entered; this node is directly associated with a slimming algorithm, so the image restoration node can be determined to be a target processing node, and the corresponding target processing algorithm is the slimming algorithm.

Therefore, the resolution enhancement node 6011, the brightness enhancement node 6014, the content extraction node 602, and the image restoration node 603 can be determined to be target processing nodes, and the target processing algorithm associated with each target processing node can be obtained.
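The FIG. 6 walk for the sample image can be condensed into a flat decision function. This is an illustrative sketch only: the thresholds follow the example above, the algorithm names are hypothetical stand-ins, and the flat if/elif form replaces the actual tree structure for brevity.

```python
# Compact sketch of the FIG. 6 decision walk (resolution, noise,
# brightness, portrait-count thresholds follow the example above;
# names are illustrative, not the disclosure's actual algorithms).
def select_algorithms(resolution, noise, brightness, portraits):
    algos = []
    # resolution enhancement node: (0, 360p) -> 4x SR, (360p, 1080p) -> 2x SR
    if resolution < 360:
        algos.append("4x_super_resolution")
    elif resolution < 1080:
        algos.append("2x_super_resolution")
    # noise reduction node: only scores above 70 need denoising
    if noise > 70:
        algos.append("denoise")
    # brightness enhancement node: values below 30 need enhancement
    if brightness < 30:
        algos.append("brightness_enhance")
    # content extraction node: branch on the portrait count
    if 1 <= portraits <= 2:
        algos.append("skin_smoothing")
    elif portraits >= 3:
        algos.append("portrait_enhance")
    # image restoration node: slimming for portrait beautification
    if portraits >= 1:
        algos.append("slimming")
    return algos
```

With the sample parsing results (resolution 256, noise 50, brightness 20, 2 portraits), this yields the same four algorithms the worked example selects.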
In practical applications, to ensure smooth processing of the image, processing can be performed in the order of the positions of the image processing nodes in the algorithm decision tree. As an embodiment, there is at least one target processing node, and the method further includes:

determining, based on the positions of the at least one target processing node in the algorithm decision tree, the processing order corresponding to each of the at least one target processing node;

performing corresponding image processing on the image to be processed based on the target processing algorithm to obtain an image processing result includes:

sequentially executing the target processing algorithm of each target processing node according to the processing order corresponding to each of the at least one target processing node, to obtain the node processing result of each target processing node, where the node processing result output by the previous target processing node is used as the input of the next target processing node;

obtaining the node processing result corresponding to the last target processing node as the image processing result.

Optionally, the position of a target processing node in the algorithm decision tree may refer to the layer of the node in the tree and its order within the same layer.

In this embodiment, by determining the processing order corresponding to each of the at least one target processing node and executing the target processing algorithm of each target processing node in turn, the at least one target processing node is executed in sequence, realizing smooth execution of the target processing algorithms of the target processing nodes, effectively improving the execution efficiency and reliability of the at least one target processing node, and improving image processing efficiency.
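The sequential execution above amounts to function composition in node order: each node's output feeds the next node's input, and the last output is the image processing result. A minimal sketch, with trivial stand-in algorithms rather than real image operations:

```python
# Sketch of sequential pipeline execution: target processing algorithms
# run in the determined node order; the node processing result output by
# the previous node is used as the input of the next node.
def run_pipeline(image, target_algorithms):
    result = image
    for algo in target_algorithms:
        result = algo(result)  # previous node's output becomes next input
    return result              # last node's result is the processing result
```

For instance, chaining a "+1" step and a "x10" step over the input 2 produces 30, illustrating that order matters in the chain.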
To accurately select the target parsing result of each image processing node, selecting a target parsing result for each image processing node from the at least one parsing result to obtain the target parsing result corresponding to each image processing node includes:

obtaining the processing requirement information of each image processing node and the parsing content information respectively corresponding to the at least one parsing result;

according to the parsing content information respectively corresponding to the at least one parsing result, taking the parsing result corresponding to the parsing content information with the highest similarity to the processing requirement information of each image processing node as the target parsing result of that image processing node.

Optionally, the processing requirement information of an image processing node can be matched against the parsing content information of each of the at least one parsing result to obtain the degree of matching of the node with each parsing result, so that the parsing result with the highest matching degree is taken as the target parsing result of the node.

The information matching between the processing requirement information and the parsing content information may include performing word-segmentation matching on the two pieces of information to obtain their similarity; that is, the similarity may refer to the number or proportion of words with the same or similar meanings.

In this embodiment, when selecting parsing content information for an image processing node, information matching can be performed between the processing requirement information of the node and the parsing content information respectively corresponding to the at least one parsing result. Through information matching, each image processing node can be adapted to different parsing dimensions, the target parsing result of each node can be obtained accurately, and the accuracy of algorithm selection of each processing node is improved.
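The word-segmentation matching described above can be sketched as a token-overlap similarity between the node's processing requirement text and each parsing result's content description, picking the result with the highest score. This is a simplified sketch: whitespace tokenisation and Jaccard overlap stand in for the word segmentation and similarity measure, which the disclosure does not pin down.

```python
# Hedged sketch of requirement-to-content matching: the parsing result
# whose content description shares the most words with the node's
# processing requirement becomes that node's target parsing result.
def similarity(a, b):
    ta, tb = set(a.split()), set(b.split())
    # Jaccard overlap of word sets, a stand-in for "same or similar words"
    return len(ta & tb) / max(len(ta | tb), 1)

def pick_target_result(requirement, results):
    """results: mapping of parsing-content description -> parsing result."""
    best = max(results, key=lambda desc: similarity(requirement, desc))
    return results[best]
```

For a brightness-enhancement node, a requirement mentioning "brightness value" would match a "brightness value" description over a "noise score" one, so the brightness result is selected.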
As yet another embodiment, after selecting the target processing algorithm for the image to be processed according to the at least one parsing result, the method further includes:

scheduling the target processing algorithm from an image processing algorithm library into an algorithm container;

performing corresponding image improvement processing on the image to be processed based on the target processing algorithm to obtain an improved target image includes:

in the algorithm container, performing corresponding image improvement processing on the image to be processed based on the target processing algorithm to obtain the improved target image.

Optionally, an algorithm container (context) may refer to an independent algorithm running space established for the image to be processed, in which the algorithm can run independently. The algorithm container may be a CUDA (Compute Unified Device Architecture) context created for the image to be processed.

Scheduling the target processing algorithm from the image processing algorithm library into the algorithm container may include: scheduling the target processing algorithm from the image processing algorithm library into the algorithm container through an algorithm scheduler.

The image processing algorithm library can store trained image processing algorithms. Scheduling the target processing algorithm into the algorithm container may mean scheduling the parameters of the target processing algorithm from the library into the container, so that the container runs the target processing algorithm independently, making the image processing of the image to be processed independent of other algorithm containers, improving the efficiency and security of algorithm operation.

In this embodiment, after the target processing algorithm is determined, it can be scheduled into an algorithm container, and executing the target processing algorithm through the container makes its execution independent of other containers, improving the execution efficiency and security of the algorithm.
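The scheduling step can be sketched as copying a named algorithm out of a shared library into a per-image container that then runs it in isolation. This plain-Python sketch only models the scheduling flow; a real deployment might bind the container to a CUDA context, which is not shown here, and the library entry is a hypothetical stand-in algorithm.

```python
# Illustrative sketch of scheduling a chosen algorithm out of the shared
# algorithm library into a per-image "algorithm container" and running
# it there. All names are assumptions, not the disclosure's actual API.
class AlgorithmContainer:
    def __init__(self):
        self.algorithm = None

    def schedule(self, library, name):
        # copy the algorithm (its parameters) out of the shared library
        self.algorithm = library[name]

    def run(self, image):
        # the container executes the scheduled algorithm independently
        return self.algorithm(image)

# hypothetical library entry: clamp-add brightness enhancement on a
# flat list of 8-bit pixel values
LIBRARY = {"brightness_enhance": lambda img: [min(p + 30, 255) for p in img]}
```

Each image gets its own container instance, so concurrent images never share mutable algorithm state, which mirrors the isolation rationale above.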
As shown in FIG. 7, which is a schematic structural diagram of an embodiment of an image processing apparatus provided by an embodiment of the present disclosure, the image processing apparatus 700 may include the following units:

an image acquisition unit 701, configured to obtain an image to be processed in response to an image processing request;

an image parsing unit 702, configured to parse the image to be processed from at least one parsing dimension to obtain at least one parsing result of the image to be processed;

an algorithm selection unit 703, configured to select a target processing algorithm for the image to be processed according to the at least one parsing result;

an image processing unit 704, configured to perform corresponding image processing on the image to be processed based on the target processing algorithm to obtain an image processing result.
As an embodiment, the image parsing unit includes:

a model obtaining module, configured to obtain an image parsing model matching each parsing dimension;

an image parsing module, configured to input the image to be processed into the image parsing model of each parsing dimension respectively for parsing calculation, obtain the parsing results respectively corresponding to the image to be processed in the at least one parsing dimension, and obtain the at least one parsing result of the image to be processed.

As yet another embodiment, the algorithm selection unit includes:

a first determining module, configured to determine at least one image processing node in an algorithm decision tree, where the algorithm decision tree is established by connecting the at least one image processing node according to the algorithm selection strategy of each node;

a result selection module, configured to select a target parsing result for each image processing node from the at least one parsing result, to obtain the target parsing result corresponding to each image processing node;

a result obtaining module, configured to determine, by using the target parsing result of each image processing node, a target processing node in the algorithm decision tree that meets the algorithm execution condition;

an algorithm selection module, configured to select the target processing algorithm of the target processing node according to the algorithm selection strategy, to obtain the target processing algorithm of the image to be processed.

In some embodiments, the result obtaining module includes:

a condition judging submodule, configured to judge, in top-down order starting from the first image processing node of the algorithm decision tree, whether the target parsing result of the image processing node meets the algorithm execution condition;

a first processing submodule, configured to: if it is determined that the target parsing result of the image processing node meets the algorithm execution condition, determine the image processing node to be a target processing node, and after the target processing algorithm of the target processing node is selected according to the algorithm selection strategy, enter the next image processing node associated with the target processing algorithm, and continue to judge whether the target parsing result of the image processing node meets the algorithm execution condition;

a second processing submodule, configured to: if it is determined that the target parsing result of the image processing node does not meet the algorithm execution condition, enter the next image processing node associated with the image processing node, and continue to judge whether the target parsing result of the image processing node meets the algorithm execution condition;

a target obtaining submodule, configured to obtain the target processing nodes obtained when the traversal of the algorithm decision tree ends.

As an embodiment, the algorithm selection strategy includes at least one numerical interval of the image processing node, each numerical interval is associated with an image processing algorithm or the next image processing node, and the image processing algorithm of each numerical interval is associated with the next image processing node;

the result obtaining module includes:

a first determining submodule, configured to determine the at least one numerical interval corresponding to the algorithm selection strategy of each image processing node;

an interval obtaining submodule, configured to determine, from the at least one numerical interval corresponding to each image processing node, a first numerical interval directly associated with the next image processing node;

a node traversal submodule, configured to traverse the algorithm decision tree, and if the target parsing result of any image processing node does not belong to the first numerical interval of the image processing node, determine the image processing node to be a target processing node meeting the algorithm execution condition, to obtain the target processing nodes aggregated by the algorithm decision tree.

As an embodiment, the algorithm selection module includes:

a second determining submodule, configured to determine a second numerical interval, other than the first numerical interval, among the at least one numerical interval corresponding to the algorithm selection strategy;

a third determining submodule, configured to determine, based on the target parsing result of the target processing node, a target numerical interval to which the target parsing result belongs from the second numerical interval;

an algorithm associating submodule, configured to take the image processing algorithm associated with the target numerical interval as the target processing algorithm of the target processing node.
As an optional implementation, the image processing nodes include: an image enhancement node; the image enhancement node specifically includes at least one of: a resolution enhancement node, a noise reduction node, a brightness enhancement node, and a color correction node;

the target parsing result corresponding to the resolution enhancement node includes: a resolution value;

the target parsing result corresponding to the noise reduction node includes: a clarity score and a noise score;

the target parsing result corresponding to the brightness enhancement node includes: a brightness value;

the target parsing result corresponding to the color correction node includes: a color score.

As yet another optional implementation, the image processing nodes further include: a content extraction node; the target parsing result corresponding to the content extraction node includes: the number of portraits.

As yet another optional implementation, the image processing nodes further include: an image restoration node; the image restoration node includes at least one of: an image erasing algorithm, an image expansion algorithm, an image cropping algorithm, and a portrait slimming algorithm.

As an embodiment, there is at least one target processing node, and the apparatus further includes:

an order determining unit, configured to determine, based on the positions of the at least one target processing node in the algorithm decision tree, the processing order corresponding to each of the at least one target processing node;

the image processing unit includes:

a sequential execution module, configured to sequentially execute the target processing algorithm of each target processing node according to the processing order corresponding to each of the at least one target processing node, to obtain the node processing result of each target processing node, where the node processing result output by the previous target processing node is used as the input of the next target processing node;

a result obtaining module, configured to obtain the node processing result corresponding to the last target processing node as the image processing result.

In some embodiments, the result selection module includes:

a content obtaining submodule, configured to obtain the processing requirement information of each image processing node and the parsing content information respectively corresponding to the at least one parsing result;

an information matching submodule, configured to take, according to the parsing content information respectively corresponding to the at least one parsing result, the parsing result corresponding to the parsing content information with the highest similarity to the processing requirement information of each image processing node as the target parsing result of that image processing node.

As an embodiment, the apparatus further includes:

an algorithm scheduling unit, configured to schedule the target processing algorithm from an image processing algorithm library into an algorithm container;

the image processing unit includes:

a container execution module, configured to perform, in the algorithm container, corresponding image improvement processing on the image to be processed based on the target processing algorithm to obtain the improved target image.
The apparatus provided in this embodiment can be used to execute the technical solutions of the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.

To implement the above embodiments, an embodiment of the present disclosure further provides an electronic device.

Referring to FIG. 8, which shows a schematic structural diagram of an electronic device 800 suitable for implementing the embodiments of the present disclosure, the electronic device 800 may be a terminal device or a server. The terminal device may include, but is not limited to, mobile terminals such as a mobile phone, a laptop computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (Portable Android Device, PAD), a portable multimedia player (PMP), and a vehicle-mounted terminal (for example, a vehicle-mounted navigation terminal), and fixed terminals such as a digital TV and a desktop computer. The electronic device shown in FIG. 8 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.

As shown in FIG. 8, the electronic device 800 may include a processing device (for example, a central processing unit, a graphics processing unit, etc.) 801, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage device 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the electronic device 800. The processing device 801, the ROM 802, and the RAM 803 are connected to one another through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.

Generally, the following devices can be connected to the I/O interface 805: an input device 806 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output device 807 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a storage device 808 including, for example, a magnetic tape and a hard disk; and a communication device 809. The communication device 809 can allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 8 shows the electronic device 800 with various devices, it should be understood that it is not required to implement or have all the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts can be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication device 809, or installed from the storage device 808, or installed from the ROM 802. When the computer program is executed by the processing device 801, the above functions defined in the method of the embodiments of the present disclosure are executed.

It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and it can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium can be transmitted with any suitable medium, including but not limited to: a wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.

The computer-readable medium may be included in the electronic device, or may exist independently without being incorporated into the electronic device.

The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to execute the method shown in the above embodiments.

Computer program code for performing the operations of the present disclosure can be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computer, partially on the user's computer, as a standalone software package, partially on the user's computer and partially on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).

The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to the various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagram can represent a module, a program segment, or a part of code, and the module, program segment, or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the blocks can also occur in an order different from that marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the opposite order, depending on the functions involved. It should also be noted that each block in the block diagram and/or flowchart, and the combination of blocks in the block diagram and/or flowchart, can be implemented with a dedicated hardware-based system that performs the specified functions or operations, or can be implemented with a combination of dedicated hardware and computer instructions.

The nodes involved in the embodiments described in the present disclosure can be implemented by software or by hardware. The name of a node does not limit the node itself in some cases; for example, the first acquisition node can also be described as a "node that acquires at least two Internet Protocol addresses".

The functions described above herein can be executed at least partially by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and the like.

In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, an image processing method is provided, including:
obtaining, in response to an image processing request, an image to be processed;
parsing the image to be processed along at least one parsing dimension to obtain at least one parsing result of the image to be processed;
selecting, according to the at least one parsing result, a target processing algorithm for the image to be processed;
performing, based on the target processing algorithm, corresponding image improvement processing on the image to be processed to obtain an improved target image.
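The four steps of the first aspect can be sketched as a minimal pipeline. This is an illustrative sketch only; all function names and the toy grayscale-list image representation are hypothetical and not part of the disclosure:

```python
def analyze_brightness(image):
    # Toy parser: mean pixel value of a grayscale image, scaled to [0, 1].
    return sum(image) / (255 * len(image))

def brighten(image):
    # Toy improvement algorithm: lift every pixel, clamped to 255.
    return [min(255, p + 40) for p in image]

def identity(image):
    # No-op algorithm used when no improvement is needed.
    return image

def process_image(image):
    # Step 1: the image to be processed was obtained in response to a request.
    # Step 2: parse the image along a parsing dimension (brightness here).
    results = {"brightness": analyze_brightness(image)}
    # Step 3: select a target processing algorithm from the parsing results.
    algorithm = brighten if results["brightness"] < 0.5 else identity
    # Step 4: run the selected algorithm to obtain the improved target image.
    return algorithm(image)
```

A dark image is brightened while an already-bright one passes through unchanged, mirroring the idea that the algorithm is chosen per image rather than fixed in advance.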
According to one or more embodiments of the present disclosure, parsing the image to be processed along at least one parsing dimension to obtain at least one parsing result of the image to be processed includes:
acquiring an image analysis model matched to each parsing dimension;
inputting the image to be processed into the image analysis model of each parsing dimension for analytical computation, so as to obtain a parsing result of the image to be processed in each of the at least one parsing dimension, thereby obtaining the at least one parsing result of the image to be processed.
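The per-dimension analysis step above can be sketched as a mapping from parsing dimensions to analysis models, each applied independently to the same image. The dimension names and the stand-in lambda "models" are hypothetical; real implementations would plug in trained models:

```python
# Hypothetical per-dimension "analysis models": each maps an image
# (a flat list of grayscale pixels here) to one parsing result.
ANALYSIS_MODELS = {
    "brightness": lambda img: sum(img) / (255 * len(img)),
    "noise": lambda img: max(img) - min(img),  # crude dynamic-range proxy
}

def parse_image(image, models=ANALYSIS_MODELS):
    # Feed the image to the model matched to each parsing dimension and
    # collect one parsing result per dimension.
    return {dim: model(image) for dim, model in models.items()}
```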
According to one or more embodiments of the present disclosure, selecting, according to the at least one parsing result, a target processing algorithm for the image to be processed includes:
determining at least one image processing node in an algorithm decision tree, where the algorithm decision tree is formed by connecting the at least one image processing node according to an algorithm selection policy of each node;
selecting a target parsing result for each image processing node from the at least one parsing result to obtain the target parsing result corresponding to each image processing node;
determining, using the target parsing result of each image processing node, a target processing node in the algorithm decision tree that satisfies an algorithm execution condition;
selecting the target processing algorithm of the target processing node according to the algorithm selection policy, so as to obtain the target processing algorithm for the image to be processed.
According to one or more embodiments of the present disclosure, determining, using the target parsing result of each image processing node, a target processing node in the algorithm decision tree that satisfies an algorithm execution condition includes:
starting from the first image processing node of the algorithm decision tree in top-down order, determining whether the target parsing result of the image processing node satisfies the algorithm execution condition;
if it is determined that the target parsing result of the image processing node satisfies the algorithm execution condition, determining the image processing node to be a target processing node, and, after selecting the target processing algorithm of the target processing node according to the algorithm selection policy, proceeding to the next image processing node associated with the target processing algorithm and continuing to determine whether the target parsing result of that image processing node satisfies the algorithm execution condition;
if it is determined that the target parsing result of the image processing node does not satisfy the algorithm execution condition, proceeding to the next image processing node associated with the image processing node and continuing to determine whether the target parsing result of that image processing node satisfies the algorithm execution condition;
acquiring the target processing nodes obtained when traversal of the algorithm decision tree ends.
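The top-down traversal described above can be sketched as a simple walk over linked node records. The dict-based node layout (`dimension`, `condition`, `next_if_selected`, `next_if_skipped`) is an assumed representation, not one given in the disclosure:

```python
def traverse(root, parsed):
    # Walk the decision tree top-down, starting from its first node, and
    # collect every node whose target parsing result satisfies that node's
    # algorithm execution condition.
    targets = []
    node = root
    while node is not None:
        value = parsed[node["dimension"]]
        if node["condition"](value):
            # Condition met: record the node as a target processing node,
            # then follow the link associated with its selected algorithm.
            targets.append(node)
            node = node["next_if_selected"]
        else:
            # Condition not met: skip straight to the node's own next node.
            node = node["next_if_skipped"]
    return targets
```

Traversal ends when a node has no successor, at which point `targets` holds the target processing nodes obtained for the image.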
According to one or more embodiments of the present disclosure, the algorithm selection policy includes at least one numerical interval of the image processing node, each numerical interval is associated with an image processing algorithm or with a next image processing node, and the image processing algorithm of each numerical interval is associated with a next image processing node;
determining, using the target parsing result of each image processing node, a target processing node in the algorithm decision tree that satisfies an algorithm execution condition includes:
determining the at least one numerical interval corresponding to the algorithm selection policy of each image processing node;
determining, from the at least one numerical interval corresponding to each image processing node, a first numerical interval directly associated with the next image processing node;
traversing the algorithm decision tree, and, if the target parsing result of any image processing node does not fall within the first numerical interval of that image processing node, determining the image processing node to be a target processing node satisfying the algorithm execution condition, so as to obtain the target processing nodes in the algorithm decision tree.
According to one or more embodiments of the present disclosure, selecting the target processing algorithm of the target processing node according to the algorithm selection policy includes:
determining a second numerical interval, other than the first numerical interval, among the at least one numerical interval corresponding to the algorithm selection policy;
determining, based on the target parsing result of the target processing node, a target numerical interval to which the target parsing result belongs from the second numerical interval;
taking the image processing algorithm associated with the target numerical interval as the target processing algorithm of the target processing node.
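The interval-based selection in the two passages above can be sketched in one function. The tuple layout `(low, high, payload)` and half-open interval boundaries are assumptions for illustration: the first interval links directly to the next node (a value inside it means the node is skipped), while each second interval carries an algorithm:

```python
def select_algorithm(node, value):
    # The node's selection policy is a set of numerical intervals.
    # First interval: directly associated with the next node, no algorithm.
    low, high, _next_node = node["first_interval"]
    if low <= value < high:
        # Target parsing result falls in the first interval: the algorithm
        # execution condition is not met, so no algorithm is selected.
        return None
    # Second intervals: each carries an image processing algorithm; pick the
    # algorithm of the target interval containing the parsing result.
    for lo, hi, algorithm in node["second_intervals"]:
        if lo <= value < hi:
            return algorithm
    return None
```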
According to one or more embodiments of the present disclosure, the image processing node includes an image enhancement node; the image enhancement node specifically includes at least one of: a resolution enhancement node, a noise reduction node, a brightness enhancement node, and a color correction node;
the target parsing result corresponding to the resolution enhancement node includes: a resolution value;
the target parsing result corresponding to the noise reduction node includes: a sharpness score and a noise score;
the target parsing result corresponding to the brightness enhancement node includes: a brightness value;
the target parsing result corresponding to the color correction node includes: a color score.
According to one or more embodiments of the present disclosure, the image processing node further includes a content extraction node; the target parsing result corresponding to the content extraction node includes: a number of portraits.
According to one or more embodiments of the present disclosure, the image processing node further includes an image restoration node; the image restoration node includes at least one of: an image erasing algorithm, an image extension algorithm, an image cropping algorithm, and a portrait slimming algorithm.
According to one or more embodiments of the present disclosure, there is at least one target processing node, and the method further includes:
determining, based on the position of the at least one target processing node in the algorithm decision tree, a processing order corresponding to each of the at least one target processing node;
performing, based on the target processing algorithm, corresponding image processing on the image to be processed to obtain an image processing result includes:
sequentially executing the target processing algorithm of each target processing node according to the processing order corresponding to each of the at least one target processing node to obtain a node processing result of each target processing node, where the node processing result output by a preceding target processing node serves as the input of a following target processing node;
obtaining the node processing result corresponding to the last target processing node as the image processing result.
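The chained execution described above reduces to a left fold over the ordered target algorithms: each node's output feeds the next node, and the last output is the image processing result. A minimal sketch:

```python
def run_pipeline(image, target_algorithms):
    # target_algorithms is already sorted by each node's position in the
    # decision tree. The preceding node's output is the next node's input.
    result = image
    for algorithm in target_algorithms:
        result = algorithm(result)
    # The last node's processing result is the final image processing result.
    return result
```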
According to one or more embodiments of the present disclosure, selecting a target parsing result for each image processing node from the at least one parsing result to obtain the target parsing result corresponding to each image processing node includes:
determining processing requirement information of each image processing node and parsed content information corresponding to each of the at least one parsing result;
taking, according to the parsed content information corresponding to each of the at least one parsing result, the parsing result whose parsed content information has the highest similarity to the processing requirement information of each image processing node as the target parsing result of that image processing node.
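The highest-similarity matching above can be sketched with a stand-in similarity measure. Token (Jaccard) overlap is an assumption here; the disclosure does not specify how similarity between requirement information and parsed content information is computed:

```python
def pick_target_result(node_requirement, parsed_results):
    # parsed_results maps a content description to its parsing result.
    # Choose the result whose content description is most similar to the
    # node's processing requirement text.
    def similarity(a, b):
        ta, tb = set(a.split()), set(b.split())
        return len(ta & tb) / max(len(ta | tb), 1)

    best = max(parsed_results, key=lambda desc: similarity(desc, node_requirement))
    return parsed_results[best]
```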
According to one or more embodiments of the present disclosure, after selecting, according to the at least one parsing result, the target processing algorithm for the image to be processed, the method further includes:
dispatching the target processing algorithm from an image processing algorithm library into an algorithm container;
performing, based on the target processing algorithm, corresponding image improvement processing on the image to be processed to obtain the improved target image includes:
performing, in the algorithm container, corresponding image improvement processing on the image to be processed based on the target processing algorithm to obtain the improved target image.
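The library-to-container dispatch above can be sketched as a registry plus a lightweight runtime that holds only the algorithms scheduled for the current request. The class, the registry contents, and the in-process "container" are all illustrative assumptions; a production system might use actual containerized workers:

```python
# Hypothetical image processing algorithm library.
ALGORITHM_LIBRARY = {
    "brighten": lambda img: [min(255, p + 40) for p in img],
    "denoise": lambda img: img,  # placeholder no-op
}

class AlgorithmContainer:
    """Holds only the target algorithms dispatched for one request."""

    def __init__(self):
        self.loaded = {}

    def schedule(self, name):
        # Dispatch the target processing algorithm from the library
        # into this container.
        self.loaded[name] = ALGORITHM_LIBRARY[name]

    def run(self, name, image):
        # Perform the image improvement processing inside the container.
        return self.loaded[name](image)
```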
In a second aspect, according to one or more embodiments of the present disclosure, an image processing apparatus is provided, including:
an image acquisition unit, configured to obtain, in response to an image processing request, an image to be processed;
an image parsing unit, configured to parse the image to be processed along at least one parsing dimension to obtain at least one parsing result of the image to be processed;
an algorithm selection unit, configured to select, according to the at least one parsing result, a target processing algorithm for the image to be processed;
an image processing unit, configured to perform, based on the target processing algorithm, corresponding image processing on the image to be processed to obtain an image processing result.
In a third aspect, according to one or more embodiments of the present disclosure, an electronic device is provided, including: at least one processor and a memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the image processing method according to the first aspect and the various possible designs of the first aspect.
In a fourth aspect, according to one or more embodiments of the present disclosure, a computer-readable storage medium is provided, in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the image processing method according to the first aspect and the various possible designs of the first aspect is implemented.
In a fifth aspect, according to one or more embodiments of the present disclosure, a computer program product is provided, including a computer program that, when executed by a processor, implements the image processing method according to the first aspect and the various possible designs of the first aspect.
The above description is merely an account of preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art should understand that the scope of disclosure involved herein is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the disclosed concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.
Furthermore, although the operations are depicted in a particular order, this should not be understood as requiring that they be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims.

Claims (16)

  1. An image processing method, comprising:
    obtaining, in response to an image processing request, an image to be processed;
    parsing the image to be processed along at least one parsing dimension to obtain at least one parsing result of the image to be processed;
    selecting, according to the at least one parsing result, a target processing algorithm for the image to be processed;
    performing, based on the target processing algorithm, corresponding image improvement processing on the image to be processed to obtain an improved target image.
  2. The method according to claim 1, wherein the parsing the image to be processed along at least one parsing dimension to obtain at least one parsing result of the image to be processed comprises:
    acquiring an image analysis model matched to each parsing dimension;
    inputting the image to be processed into the image analysis model of each parsing dimension for analytical computation to obtain a parsing result of the image to be processed in each of the at least one parsing dimension, thereby obtaining the at least one parsing result of the image to be processed.
  3. The method according to claim 1 or 2, wherein the selecting, according to the at least one parsing result, a target processing algorithm for the image to be processed comprises:
    determining at least one image processing node in an algorithm decision tree, wherein the algorithm decision tree is formed by connecting the at least one image processing node according to an algorithm selection policy of each node;
    selecting a target parsing result for each image processing node from the at least one parsing result to obtain the target parsing result corresponding to each image processing node;
    determining, using the target parsing result of each image processing node, a target processing node in the algorithm decision tree that satisfies an algorithm execution condition;
    selecting the target processing algorithm of the target processing node according to the algorithm selection policy, so as to obtain the target processing algorithm for the image to be processed.
  4. The method according to claim 3, wherein the determining, using the target parsing result of each image processing node, a target processing node in the algorithm decision tree that satisfies an algorithm execution condition comprises:
    starting from the first image processing node of the algorithm decision tree in top-down order, determining whether the target parsing result of the image processing node satisfies the algorithm execution condition;
    if it is determined that the target parsing result of the image processing node satisfies the algorithm execution condition, determining the image processing node to be a target processing node, and, after selecting the target processing algorithm of the target processing node according to the algorithm selection policy, proceeding to the next image processing node associated with the target processing algorithm and continuing to determine whether the target parsing result of the image processing node satisfies the algorithm execution condition;
    if it is determined that the target parsing result of the image processing node does not satisfy the algorithm execution condition, proceeding to the next image processing node associated with the image processing node and continuing to determine whether the target parsing result of the image processing node satisfies the algorithm execution condition;
    acquiring the target processing nodes obtained when traversal of the algorithm decision tree ends.
  5. The method according to claim 3, wherein the algorithm selection policy comprises at least one numerical interval of the image processing node, each numerical interval is associated with an image processing algorithm or with a next image processing node, and the image processing algorithm of each numerical interval is associated with a next image processing node;
    the determining, using the target parsing result of each image processing node, a target processing node in the algorithm decision tree that satisfies an algorithm execution condition comprises:
    determining the at least one numerical interval corresponding to the algorithm selection policy of each image processing node;
    determining, from the at least one numerical interval corresponding to each image processing node, a first numerical interval directly associated with the next image processing node;
    traversing the algorithm decision tree, and, if the target parsing result of any image processing node does not fall within the first numerical interval of the image processing node, determining the image processing node to be a target processing node satisfying the algorithm execution condition, so as to obtain the target processing nodes in the algorithm decision tree.
  6. The method according to claim 5, wherein the selecting the target processing algorithm of the target processing node according to the algorithm selection policy comprises:
    determining a second numerical interval, other than the first numerical interval, among the at least one numerical interval corresponding to the algorithm selection policy;
    determining, based on the target parsing result of the target processing node, a target numerical interval to which the target parsing result belongs from the second numerical interval;
    taking the image processing algorithm associated with the target numerical interval as the target processing algorithm of the target processing node.
  7. The method according to claim 3, wherein the image processing node comprises an image enhancement node, and the image enhancement node specifically comprises at least one of: a resolution enhancement node, a noise reduction node, a brightness enhancement node, and a color correction node;
    the target parsing result corresponding to the resolution enhancement node comprises: a resolution value;
    the target parsing result corresponding to the noise reduction node comprises: a sharpness score and a noise score;
    the target parsing result corresponding to the brightness enhancement node comprises: a brightness value;
    the target parsing result corresponding to the color correction node comprises: a color score.
  8. The method according to claim 3, wherein the image processing node further comprises a content extraction node, and the target parsing result corresponding to the content extraction node comprises: a number of portraits.
  9. The method according to claim 3, wherein the image processing node further comprises an image restoration node, and the image restoration node comprises at least one of: an image erasing algorithm, an image extension algorithm, an image cropping algorithm, and a portrait slimming algorithm.
  10. The method according to claim 3, wherein there is at least one target processing node, and the method further comprises:
    determining, based on the position of the at least one target processing node in the algorithm decision tree, a processing order corresponding to each of the at least one target processing node;
    the performing, based on the target processing algorithm, corresponding image processing on the image to be processed to obtain an image processing result comprises:
    sequentially executing the target processing algorithm of each target processing node according to the processing order corresponding to each of the at least one target processing node to obtain a node processing result of each target processing node, wherein the node processing result output by a preceding target processing node serves as the input of a following target processing node;
    obtaining the node processing result corresponding to the last target processing node as the image processing result.
  11. The method according to claim 3, wherein the selecting a target parsing result for each image processing node from the at least one parsing result to obtain the target parsing result corresponding to each image processing node comprises:
    determining processing requirement information of each image processing node and parsed content information corresponding to each of the at least one parsing result;
    taking, according to the parsed content information corresponding to each of the at least one parsing result, the parsing result whose parsed content information has the highest similarity to the processing requirement information of each image processing node as the target parsing result of the image processing node.
  12. The method according to any one of claims 1 to 11, wherein, after the selecting, according to the at least one parsing result, a target processing algorithm for the image to be processed, the method further comprises:
    dispatching the target processing algorithm from an image processing algorithm library into an algorithm container;
    the performing, based on the target processing algorithm, corresponding image improvement processing on the image to be processed to obtain an improved target image comprises:
    performing, in the algorithm container, corresponding image improvement processing on the image to be processed based on the target processing algorithm to obtain the improved target image.
  13. An image processing apparatus, comprising:
    an image acquisition unit, configured to obtain, in response to an image processing request, an image to be processed;
    an image parsing unit, configured to parse the image to be processed along at least one parsing dimension to obtain at least one parsing result of the image to be processed;
    an algorithm selection unit, configured to select, according to the at least one parsing result, a target processing algorithm for the image to be processed;
    an image processing unit, configured to perform, based on the target processing algorithm, corresponding image processing on the image to be processed to obtain an image processing result.
  14. An electronic device, comprising: a processor and a memory;
    wherein the memory stores computer-executable instructions;
    and the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the image processing method according to any one of claims 1 to 12.
  15. A computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium, and, when a processor executes the computer-executable instructions, the image processing method according to any one of claims 1 to 12 is implemented.
  16. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the image processing method according to any one of claims 1 to 12.
PCT/CN2023/129171 2022-11-02 2023-11-01 Image processing method, apparatus, device, medium, and product WO2024094086A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211367028.XA CN115830362A (zh) 2022-11-02 2022-11-02 Image processing method, apparatus, device, medium, and product
CN202211367028.X 2022-11-02

Publications (1)

Publication Number Publication Date
WO2024094086A1 true WO2024094086A1 (zh) 2024-05-10

Family

ID=85526347

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/129171 WO2024094086A1 (zh) Image processing method, apparatus, device, medium, and product

Country Status (2)

Country Link
CN (1) CN115830362A (zh)
WO (1) WO2024094086A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115830362A (zh) * 2022-11-02 2023-03-21 抖音视界有限公司 Image processing method, apparatus, device, medium, and product

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160217328A1 (en) * 2013-09-30 2016-07-28 Danielle YANAI Image and video processing and optimization
CN107633480A (zh) * 2017-09-14 2018-01-26 光锐恒宇(北京)科技有限公司 Image processing method and apparatus
CN111696064A (zh) * 2020-06-15 2020-09-22 北京金山云网络技术有限公司 Image processing method and apparatus, electronic device, and computer-readable medium
CN113674159A (zh) * 2020-05-15 2021-11-19 北京三星通信技术研究有限公司 Image processing method and apparatus, electronic device, and readable storage medium
CN114066828A (zh) * 2021-11-03 2022-02-18 深圳市创科自动化控制技术有限公司 Image processing method and system based on multi-functional underlying algorithms
CN114972021A (zh) * 2022-04-13 2022-08-30 北京字节跳动网络技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN115830362A (zh) * 2022-11-02 2023-03-21 抖音视界有限公司 Image processing method, apparatus, device, medium, and product


Also Published As

Publication number Publication date
CN115830362A (zh) 2023-03-21
