CN114266694A - Image processing method, apparatus and computer storage medium

Info

Publication number: CN114266694A
Application number: CN202111580672.0A
Authority: CN (China)
Prior art keywords: image, raw, processed, map, target
Legal status: Pending (the listed status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 姚远, 崔苗苗
Current Assignee: Alibaba China Co Ltd
Original Assignee: Alibaba China Co Ltd
Application filed by Alibaba China Co Ltd

Landscapes

  • Studio Devices (AREA)

Abstract

Embodiments of the invention provide an image processing method, an image processing device, and a computer storage medium. The image processing method comprises the following steps: acquiring a RAW image to be processed; determining scene information corresponding to the RAW image to be processed; determining an adjustment parameter corresponding to the RAW image to be processed based on the scene information; and processing the RAW image to be processed based on the adjustment parameter to obtain a target image, wherein the format of the target image is different from that of the RAW image to be processed. The technical solution provided by the embodiments automatically converts the RAW image to be processed into the target image, thereby improving the delivery efficiency of post-production retouching.

Description

Image processing method, apparatus and computer storage medium
Technical Field
The present invention relates to the field of image technologies, and in particular, to an image processing method and apparatus, and a computer storage medium.
Background
A RAW-format image is a raw data file generated from the camera's exposure with little processing. It records metadata produced at capture time, such as the sensitivity (ISO) setting, shutter speed, aperture value, and white balance, and is also called a "digital negative".
RAW conversion is a step in professional post-production retouching whose goal is to convert a captured RAW image into a JPG original with rich detail and accurate color, so that a retoucher can conveniently perform personalized, refined beautification based on that original. At present, retouchers mainly perform this conversion manually with professional software; the whole process is time-consuming and laborious, which greatly reduces the delivery efficiency of post-production retouching.
Disclosure of Invention
Embodiments of the present invention provide an image processing method, an image processing apparatus, and a computer storage medium, which automatically convert a RAW image into a target image in a preset format and thus help improve the quality and efficiency of post-production image processing.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring a RAW image to be processed;
determining scene information corresponding to the RAW map to be processed;
determining an adjusting parameter corresponding to the RAW map to be processed based on the scene information;
and processing the RAW image to be processed based on the adjusting parameter to obtain a target image, wherein the format of the target image is different from that of the RAW image to be processed.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
the first acquisition module is used for acquiring a RAW image to be processed;
the first determining module is used for determining scene information corresponding to the RAW image to be processed;
the first determining module is configured to determine, based on the scene information, an adjustment parameter corresponding to the RAW map to be processed;
and the first processing module is used for processing the RAW image to be processed based on the adjusting parameter to obtain a target image, and the format of the target image is different from that of the RAW image to be processed.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the image processing method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer storage medium for storing a computer program, where the computer program is used to make a computer implement the image processing method in the first aspect when executed.
In a fifth aspect, an embodiment of the present invention provides a computer program product, including: a computer program which, when executed by a processor of an electronic device, causes the processor to carry out the steps of the image processing method according to the first aspect.
In a sixth aspect, an embodiment of the present invention provides an image processing method, including:
acquiring a RAW image to be processed, wherein the RAW image to be processed contains scene information of the scene in which a wearable device is located;
determining an adjusting parameter corresponding to the RAW map to be processed based on the scene information;
processing the RAW image to be processed based on the adjusting parameter to obtain a target image, wherein the format of the target image is different from that of the RAW image to be processed;
displaying the target image through the wearable device.
In a seventh aspect, an embodiment of the present invention provides an image processing apparatus, including:
the second acquisition module is used for acquiring a RAW image to be processed, wherein the RAW image to be processed comprises scene information where the wearable equipment is located;
a second determining module, configured to determine, based on the scene information, an adjustment parameter corresponding to the RAW map to be processed;
the second processing module is used for processing the RAW image to be processed based on the adjusting parameter to obtain a target image, and the format of the target image is different from that of the RAW image to be processed;
and the second display module is used for displaying the target image through the wearable equipment.
In an eighth aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the image processing method of the sixth aspect.
In a ninth aspect, an embodiment of the present invention provides a computer storage medium for storing a computer program, which, when executed by a computer, implements the image processing method in the sixth aspect.
In a tenth aspect, an embodiment of the present invention provides a computer program product, including: a computer program which, when executed by a processor of an electronic device, causes the processor to carry out the steps in the image processing method shown in the sixth aspect described above.
In the technical solution provided by this embodiment, a RAW image to be processed is acquired, scene information corresponding to the RAW image is determined, an adjustment parameter corresponding to the RAW image is determined based on the scene information, and the RAW image is then converted based on the adjustment parameter. The RAW image can thus be converted automatically into a target image in a preset format without manual operation, which improves the quality and efficiency of RAW conversion and helps improve the delivery efficiency of post-production retouching. In addition, because different scene information can correspond to different adjustment parameters, RAW images captured in different scenes can be processed with different adjustment parameters; this not only meets the actual requirements of different users but also improves the extensibility, controllability, and thus the practicability of the method.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a scene schematic diagram of an image processing method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of performing conversion processing on the RAW image to be processed based on the adjustment parameter to obtain a target image according to the embodiment of the present invention;
FIG. 4 is a flowchart illustrating another image processing method according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device corresponding to the image processing apparatus provided in the embodiment shown in fig. 6;
FIG. 8 is a flowchart illustrating another image processing method according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device corresponding to the image processing apparatus provided in the embodiment shown in fig. 9.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the embodiments of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; "plural" generally means at least two. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that an article or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such an article or system. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the article or system that comprises the element.
Definition of terms:
RAW format image: a raw data file generated from the camera's exposure with little processing. It records metadata produced at capture time, such as the sensitivity (ISO) setting, shutter speed, aperture value, and white balance, and is also called a "digital negative".
In order to facilitate understanding of specific implementation processes and implementation effects of the technical solutions in the present application, the following briefly describes related technologies:
In the field of professional photography, a RAW image carries more image information than a JPG image and preserves a wider color gamut and dynamic range, which gives the retoucher more latitude during post-processing, so professional photographers usually choose to shoot in the RAW format. RAW conversion is the process of converting a captured RAW image into a JPG original with rich detail and accurate color; the retoucher can then perform post operations such as personalized style filters and portrait beautification based on the converted original.
In professional photography retouching, RAW conversion is currently a manual process. In practice, after a group of images has been shot, a retoucher uses professional software (such as Camera Raw) to batch-adjust the captured RAW images. During this manual adjustment, the retoucher divides the images to be processed into several groups according to tonal style, shooting scene, and other conditions that need adjustment, and tunes one set of color and exposure parameters per group to improve working efficiency. The specific steps may include: (1) individually adjusting the color and exposure of one image from a given shooting scene to obtain a standard converted original; (2) applying the adjustment parameters corresponding to that standard original to the remaining images in batch and fine-tuning any images that remain inconsistent.
However, this way of performing RAW conversion with fixed parameters derived from a standard original requires multiple sets of adjustment parameters for different shooting scenes and target effects, that is, a standard original for every shooting scene. Since actual scenes can vary in hundreds of ways, this approach has poor extensibility and controllability and cannot meet actual requirements. Moreover, manual conversion is time-consuming and laborious and greatly reduces the delivery efficiency of post-production retouching.
In order to solve the above technical problem, the present embodiments provide an image processing method, an apparatus, and a computer storage medium. The execution subject of the image processing method is an image processing device, and the image processing device may be communicatively connected with an image capture device and the like. Specifically:
the image capturing device may be an electronic device capable of performing an image capturing operation and obtaining an image in a RAW format, and specifically, the image capturing device may be implemented as a mobile phone, a tablet computer, a camera, a video camera, or other devices with shooting capabilities.
The image processing apparatus is a device that can provide an image processing service in a network virtual environment, and generally refers to an apparatus that performs information planning and image processing operations using a network. In physical implementation, the image processing apparatus may be any device capable of providing a computing service, responding to a service request, and performing processing, such as: can be cluster servers, regular servers, cloud hosts, virtual centers, and the like. The image processing apparatus mainly includes a processor, a hard disk, a memory, a system bus, and the like, and is similar to a general computer architecture.
In the above embodiment, the image capturing device may be connected to the image processing device over a network, and the connection may be wireless or wired. If the image acquisition device communicates with the image processing device over a mobile network, the network standard may be any one of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), WiMAX, 5G, and the like.
In the embodiment of the present application, the image capturing device may implement an image capturing operation, so that a RAW map to be processed may be generated, and in order to convert the RAW map to be processed into an image in a JPG format, the generated RAW map to be processed may be sent to the image processing device.
The image processing device is configured to acquire the RAW image to be processed, determine the scene information corresponding to it, and then determine, based on the scene information, the adjustment parameter corresponding to the RAW image; it should be noted that different scene information may lead to different adjustment parameters. The RAW image is then converted based on the adjustment parameter, so that the RAW image to be processed can be converted automatically into a target image whose format differs from that of the RAW image, for example an image in JPG format, which is convenient for the user to edit, view, or retouch.
In addition, because different scene information can correspond to different adjustment parameters, performing the image processing operation with scene-specific parameters effectively ensures the quality and efficiency of target image generation, meets the actual requirements of different users, and improves the extensibility, controllability, and practicability of the method.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The features of the embodiments and examples described below may be combined with each other without conflict between the embodiments. In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention; referring to fig. 2, the present embodiment provides an image processing method, an execution subject of the method may be an image processing apparatus, the image processing apparatus may be implemented as software, or a combination of software and hardware, and in some examples, the image processing apparatus may be an apparatus capable of implementing an image capturing operation. Specifically, the image processing method may include the steps of:
step S201: and acquiring a RAW image to be processed.
Step S202: and determining scene information corresponding to the RAW map to be processed.
Step S203: and determining an adjusting parameter corresponding to the RAW map to be processed based on the scene information.
Step S204: and converting the RAW image to be processed based on the adjusting parameters to obtain a target image, wherein the format of the target image is different from that of the RAW image to be processed.
The above steps are explained in detail below:
step S201: and acquiring a RAW image to be processed.
The RAW map to be processed may refer to an image in a RAW format that needs to be processed, and the RAW map to be processed may be at least one of the following: CR2 format, NEF format, ARW format, etc., which may be embodied as RAW data file. Specifically, the specific implementation manner of obtaining the RAW map to be processed is not limited in this embodiment, and a person skilled in the art may set the RAW map to be processed according to a specific application scenario or an application requirement, in some examples, the RAW map to be processed may be stored in a preset area or preset equipment, and the RAW map to be processed may be obtained by accessing the preset area or the preset equipment. In other examples, the RAW map to be processed may be generated based on an image acquisition device, at this time, the image acquisition device is in communication connection with the image processing device, and when the image acquisition device performs an image acquisition operation, the RAW map to be processed may be generated, and then the generated RAW map to be processed may be sent to the image processing device, so that the image processing device may obtain the RAW map to be processed.
Step S202: and determining scene information corresponding to the RAW map to be processed.
After the RAW map to be processed is acquired, the RAW map to be processed may be analyzed to determine scene information corresponding to the RAW map to be processed, where the scene information may include at least one of scene color (orange scene, blue scene, white scene, red scene, etc.), scene style (realistic style, classical style, aesthetic style, etc.), scene color temperature (cool tone, warm tone), and the like.
In addition, the specific determination manner of the scene information in this embodiment is not limited, and a person skilled in the art may set the determination manner according to a specific application scene or an application requirement, and in some examples, the determining the scene information corresponding to the RAW map to be processed in this embodiment may include: acquiring a thumbnail corresponding to a RAW image to be processed; determining an image feature corresponding to the thumbnail; based on the image features, scene information is determined.
Specifically, since the RAW image to be processed cannot be viewed directly by the user, it is inconvenient to process it directly. In order to accurately determine the scene information corresponding to the RAW image, the RAW image may first be processed: it can be compressed with a preset image compression algorithm to obtain a corresponding thumbnail, which lets the user preview the original effect of the image captured by the image acquisition device. After the thumbnail is obtained, a feature extraction operation may be performed on it with a preset algorithm or a machine learning model to obtain the image features corresponding to the thumbnail. Because a mapping relationship between different image features and different scene information is configured in advance, once the image features of the thumbnail are obtained they can be analyzed against this preset mapping relationship to determine the scene information corresponding to the RAW image to be processed, which effectively ensures the accuracy and reliability of determining the scene information.
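As an illustrative sketch only (not part of the original disclosure), the thumbnail-feature approach described above could be approximated as follows in Python; the specific features (brightness and a red/blue ratio as a rough color-temperature proxy), the scene labels, and the thresholds are all assumptions for illustration.

```python
import numpy as np
from PIL import Image

def thumbnail_features(thumb_path: str) -> dict:
    """Extract simple statistics from a thumbnail to use as image features."""
    rgb = np.asarray(Image.open(thumb_path).convert("RGB"), dtype=np.float64)
    mean_r, _, mean_b = rgb.reshape(-1, 3).mean(axis=0)
    return {
        "brightness": rgb.mean() / 255.0,
        # Red/blue balance as a rough proxy for the scene's colour temperature.
        "rb_ratio": (mean_r + 1e-6) / (mean_b + 1e-6),
    }

def scene_info_from_features(features: dict) -> str:
    """Map image features to scene information via a preset rule (illustrative thresholds)."""
    if features["rb_ratio"] > 1.15:
        return "warm_tone_scene"
    if features["rb_ratio"] < 0.90:
        return "cool_tone_scene"
    return "other_scene"

# Example: scene = scene_info_from_features(thumbnail_features("IMG_0001_thumb.jpg"))
```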
In other examples, determining scene information corresponding to the RAW map to be processed may include: acquiring a thumbnail corresponding to a RAW image to be processed and a deep learning model, wherein the deep learning model is trained to be used for determining scene information of the image; and inputting the thumbnail into the deep learning model to obtain scene information corresponding to the RAW image to be processed.
When the deep learning model is generated through training, a plurality of training images can be obtained first, label information (that is, standard scene information) corresponding to each training image is determined, depth features are extracted from each training image, and learning and training are performed based on the depth features and the corresponding label information to obtain the deep learning model. After the RAW image to be processed is acquired, it may first be compressed with a preset image compression algorithm to obtain a corresponding thumbnail, which lets the user preview the original effect of the image captured by the image acquisition device. The thumbnail is then input into the deep learning model, which analyzes it and outputs the scene information corresponding to the RAW image to be processed, thereby effectively ensuring the accuracy and reliability of determining the scene information.
Step S203: and determining an adjusting parameter corresponding to the RAW map to be processed based on the scene information.
Since RAW images generated in different scenes may correspond to different adjustment parameters, a plurality of adjustment parameters for processing RAW images in different scenes are configured in advance, where the adjustment parameters may include at least one of: a color parameter, an exposure parameter, a contrast parameter, and the like. In order to ensure the quality and effect of image processing, after the scene information corresponding to the RAW image to be processed is acquired, the scene information may be analyzed to determine the adjustment parameter corresponding to the RAW image. Specifically, determining the adjustment parameter corresponding to the RAW image to be processed may include: acquiring a mapping relation between scene information and adjustment parameters, and determining the adjustment parameter corresponding to the RAW image to be processed based on the mapping relation and the scene information, wherein each scene information value corresponds to a unique adjustment parameter.
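For illustration, the preset mapping in which each scene-information value corresponds to a unique set of adjustment parameters might be represented as a simple lookup table; the parameter names and values below are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class AdjustmentParams:
    exposure_ev: float   # exposure compensation, in EV stops
    contrast: float      # contrast multiplier around mid-grey
    wb_gains: tuple      # per-channel (R, G, B) colour gains

# Each scene-information value corresponds to exactly one parameter set (values are illustrative).
SCENE_TO_PARAMS = {
    "warm_tone_scene": AdjustmentParams(exposure_ev=0.3, contrast=1.05, wb_gains=(0.95, 1.00, 1.08)),
    "cool_tone_scene": AdjustmentParams(exposure_ev=0.2, contrast=1.10, wb_gains=(1.06, 1.00, 0.94)),
    "other_scene":     AdjustmentParams(exposure_ev=0.0, contrast=1.00, wb_gains=(1.00, 1.00, 1.00)),
}

def params_for_scene(scene_info: str) -> AdjustmentParams:
    """Look up the adjustment parameters for the given scene information."""
    return SCENE_TO_PARAMS.get(scene_info, SCENE_TO_PARAMS["other_scene"])
```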
Step S204: and converting the RAW image to be processed based on the adjusting parameters to obtain a target image, wherein the format of the target image is different from that of the RAW image to be processed.
After the adjustment parameters are obtained, conversion processing can be performed on the RAW image to be processed based on the adjustment parameters, specifically, when the adjustment parameters include color parameters, the color of the RAW image to be processed can be adjusted based on the color parameters; when the adjustment parameter includes the exposure parameter, the exposure of the RAW image to be processed may be adjusted based on the exposure parameter, so that the target image may be obtained. The format of the acquired target image is different from that of the RAW image to be processed, wherein the target image may be an image in an sRGB space or a YUV space, and when the target image is an image in the sRGB space, the format of the target image may include any one of the following: JPG format, PNG format, BMP format, eps format.
It should be noted that, when the target image is an image in the YUV space, the RAW image to be processed needs to be converted into an intermediate image in the sRGB space, and then the intermediate image in the sRGB space needs to be converted into a target image in the YUV space, so that the quality and effect of generating the target image in the YUV space are effectively ensured.
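As a hedged sketch of the final step of that two-stage path (sRGB intermediate image to YUV target image), the standard BT.601 conversion could be used; the 8-bit input range is an assumption for illustration.

```python
import numpy as np

# BT.601 RGB -> YUV conversion matrix (analog form).
RGB_TO_YUV = np.array([
    [ 0.299,  0.587,  0.114],
    [-0.147, -0.289,  0.436],
    [ 0.615, -0.515, -0.100],
])

def srgb_to_yuv(srgb_8bit: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 sRGB image (0..255) to YUV with Y in 0..1 and U, V roughly in -0.5..0.5."""
    rgb = srgb_8bit.astype(np.float64) / 255.0
    return rgb @ RGB_TO_YUV.T
```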
In some examples, after the target image is acquired, the method in this embodiment may further include: displaying the target image in a display area, so that the user can intuitively see the image quality and effect after the conversion processing has been performed on the RAW image to be processed.
Further, after the target image is displayed by using the display area, prompt information interacting with the user may be generated, and the corresponding processing operation may be performed on the target image in response to an execution operation input by the user based on the prompt information.
Specifically, after the target image is generated, the effect of the target image can be previewed through the display area, so that the user can visually check the quality and the effect of image processing, and when the user is satisfied with the quality and the effect of the target image, the user can input the confirmation saving operation based on the prompt information, so that the target image can be stored based on the confirmation saving operation; when the user is not satisfied with the quality and effect of the target image, the cancel operation can be input based on the prompt information, so that the target image can be deleted based on the cancel operation, and the conversion processing operation can be carried out on the RAW image to be processed again by the cancel operation; or, the image processing method can enter a manual adjustment mode based on the cancel operation, so that the user can perform manual adjustment operation on the target image, and further can acquire the image meeting the user requirement.
In the image processing method provided by this embodiment, the RAW image to be processed is acquired, the scene information corresponding to it is determined, the adjustment parameter corresponding to it is determined based on the scene information, and the RAW image is converted based on the adjustment parameter. The RAW image can therefore be converted automatically into a target image in a preset format without manual operation, which improves the quality and efficiency of image processing and helps improve the delivery efficiency of post-production retouching. In addition, because different scene information can correspond to different adjustment parameters, RAW images captured in different scenes can be processed with different adjustment parameters, which effectively ensures the quality and efficiency of target image generation, meets the actual requirements of different users, and improves the extensibility, controllability, and practicability of the method.
Fig. 3 is a schematic flow chart illustrating that a RAW image to be processed is converted based on an adjustment parameter to obtain a target image according to an embodiment of the present invention; referring to fig. 3, this embodiment provides an implementation manner of performing conversion processing on a RAW map to be processed, and specifically, the performing conversion processing on the RAW map to be processed based on an adjustment parameter in this embodiment to obtain a target image may include:
step S301: and converting the RAW image to be processed into a standard red, green and blue sRGB space to obtain an intermediate image.
After the RAW image to be processed is obtained, the RAW image to be processed may be converted into a standard red, green and blue sRGB space, so that an intermediate image may be obtained, specifically, the converting the RAW image to be processed into the standard red, green and blue sRGB space may include: the RAW image to be processed is sequentially subjected to linear processing operation, white balance correction operation, demosaicing operation, color space conversion processing, brightness correction, gamma correction and the like, so that an intermediate image can be obtained, wherein the intermediate image is an image corresponding to the sRGB space.
In addition, the bit depth of the intermediate image is not limited in this embodiment, and those skilled in the art may configure it according to the specific application scenario or application requirement, for example: the intermediate image may be stored at 8 bits, 16 bits, or 32 bits per channel. It should be noted that the larger the bit depth of the intermediate image, the larger the gamut space available for adjusting it; the smaller the bit depth, the smaller that adjustment space.
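A minimal sketch of the RAW-to-sRGB conversion in step S301, assuming the open-source rawpy library (a LibRaw wrapper) stands in for the RAW analysis step; its postprocess call bundles linearization, white balance correction, demosaicing, color space conversion, and gamma correction, and the 16-bit output corresponds to the intermediate image discussed above.

```python
import rawpy

def raw_to_srgb_intermediate(raw_path: str):
    """Convert a RAW file into a 16-bit sRGB intermediate image (H x W x 3 numpy array)."""
    with rawpy.imread(raw_path) as raw:
        return raw.postprocess(
            use_camera_wb=True,   # white balance correction from the camera metadata
            output_bps=16,        # 16-bit output keeps a wider space for later adjustment
            no_auto_bright=True,  # leave exposure to the subsequent adjustment step
        )
```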
Step S302: and adjusting the intermediate image based on the adjusting parameters to obtain the target image.
After the intermediate image is acquired, the intermediate image can be adjusted by using the adjustment parameter, so that a target image can be acquired, specifically, when the adjustment parameter includes an exposure parameter, the exposure degree of the intermediate image can be adjusted by using the exposure parameter, so that the target image can be acquired; when the adjustment parameters include color parameters, the color of the intermediate image can be adjusted by using the color parameters, and a target image can be obtained; when the adjustment parameters include color parameters and exposure parameters, the color and exposure of the intermediate image may be analyzed by using the color parameters and the exposure parameters to obtain a target image.
In addition, the data size corresponding to the target image is not limited in this embodiment, and in order to reduce the storage space required by the target image and ensure the quality and efficiency of image display, the data size of the generated intermediate image is greater than or equal to the data size of the target image. For example, the data amount of the target image may be 16 bits, and the data amount of the intermediate image may be 16 bits, in which case the data amount of the target image is the same as the data amount of the intermediate image. Alternatively, the data amount of the target image may be 8 bits, and the data amount of the intermediate image may be 16 bits, in which case the data amount of the target image is smaller than the data amount of the intermediate image.
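An illustrative sketch of step S302, applying the adjustment parameters to the 16-bit intermediate image and producing an 8-bit target image; it reuses the hypothetical AdjustmentParams structure from the earlier example, and the particular operations (EV exposure gain, per-channel color gains, contrast around mid-grey) are assumptions rather than the patent's exact adjustment model.

```python
import numpy as np

def adjust_intermediate(img16: np.ndarray, params) -> np.ndarray:
    """Apply exposure and colour adjustment parameters to a 16-bit sRGB intermediate image."""
    x = img16.astype(np.float64) / 65535.0
    x = x * (2.0 ** params.exposure_ev)        # exposure adjustment in EV stops
    x = x * np.asarray(params.wb_gains)        # per-channel colour adjustment
    x = (x - 0.5) * params.contrast + 0.5      # contrast adjustment around mid-grey
    x = np.clip(x, 0.0, 1.0)
    # The target image is stored at 8 bit, i.e. no larger than the intermediate image.
    return (x * 255.0 + 0.5).astype(np.uint8)
```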
In this embodiment, the RAW image to be processed is converted into the standard red-green-blue (sRGB) space to obtain an intermediate image, and the intermediate image is then adjusted based on the adjustment parameters, so that the RAW image to be processed is effectively converted into a target image in the sRGB space, further ensuring the quality and effect of image processing.
FIG. 4 is a flowchart illustrating another image processing method according to an embodiment of the present invention; referring to fig. 4, in order to further improve the quality and effect of image processing, if the obtained target image is any one image in an image set, fine adjustment may be performed on the target image based on the processing effect of other images in the image set, so that the quality and effect of image processing in the image set may be kept consistent, and therefore, after the target image is obtained, the method in this embodiment may include:
step S401: and acquiring an image set corresponding to the RAW image to be processed.
After the RAW map to be processed is acquired, the RAW map to be processed may be analyzed to acquire an image set corresponding to the RAW map to be processed. In some examples, obtaining a set of images corresponding to a RAW map to be processed may include: acquiring an identity of a RAW image to be processed, and determining an image set associated with the identity as an image set corresponding to the RAW image to be processed, where it is noted that the image set corresponding to the RAW image to be processed includes the RAW image to be processed.
In addition, before acquiring the image set corresponding to the RAW map to be processed, in order to improve the quality and effect of performing analysis processing on the plurality of RAW maps, the method in this embodiment may include a partition generation operation of the image set, and specifically, before acquiring the image set corresponding to the RAW map to be processed, the method in this embodiment may further include: acquiring a plurality of RAW maps, wherein the RAW map to be processed is one of the plurality of RAW maps; grouping the plurality of RAW maps by using a deep learning model to obtain at least one image set, wherein each image set comprises at least two RAW maps.
Specifically, a deep learning model is trained in advance to divide RAW images with high similarity into the same image set. When the images to be processed include a plurality of RAW images, the plurality of RAW images can be acquired, and the RAW image to be processed in the above embodiment may be one of them. After the plurality of RAW images are acquired, they can be input into the deep learning model to obtain at least one image set, thereby effectively realizing a grouping operation on the plurality of RAW images by means of the deep learning model, where each image set may include at least two similar RAW images.
In other examples, before acquiring the image set corresponding to the RAW map to be processed, the method for acquiring an image set in this embodiment may further include: acquiring a plurality of RAW maps, wherein the RAW map to be processed is one of the plurality of RAW maps; determining image characteristics corresponding to a plurality of RAW images; the method comprises the steps of grouping a plurality of RAW maps based on image features corresponding to the RAW maps respectively to obtain at least one image set, wherein each image set comprises at least two RAW maps.
When the image needing to be subjected to the image processing operation includes a plurality of RAW maps, the plurality of RAW maps needing to be subjected to the processing operation may be acquired, and the RAW map to be processed in the above embodiment may be one of the plurality of RAW maps. In this embodiment, a specific implementation manner of obtaining multiple RAW maps is not limited, and in some examples, the multiple RAW maps may be stored in a preset area or preset equipment, and the multiple RAW maps may be obtained by accessing the preset area or the preset equipment. In other examples, the plurality of RAW maps may be generated by the image acquisition device, and after the image acquisition device generates the plurality of RAW maps, the plurality of RAW maps may be transmitted to the image processing device so that the image processing device may acquire the plurality of RAW maps.
After obtaining the multiple RAW maps, feature extraction may be performed on the multiple RAW maps, so that image features corresponding to the multiple RAW maps may be determined, and then, the image features corresponding to the multiple RAW maps may be analyzed and processed, so as to group the multiple RAW maps and obtain at least one image set, where each image set includes at least two RAW maps. Specifically, grouping the RAW maps based on original image features corresponding to the RAW maps, and obtaining at least one image set may include: determining the similarity between any two RAW images based on the image characteristics corresponding to the RAW images; and when the similarity is greater than or equal to a preset threshold, dividing two RAW graphs corresponding to the similarity into a group to obtain at least one image set.
After the image features corresponding to the plurality of RAW images are acquired, the similarity between any two RAW images can be determined based on their image features; the similarity can be determined based on any one of the Euclidean distance, the cosine distance, or the Hamming distance between the image features of the two RAW images. After the similarity is obtained, it can be compared with a preset threshold. When the similarity is smaller than the preset threshold, the similarity between the two RAW images is low, and the two images need to be divided into different image sets; when the similarity is greater than or equal to the preset threshold, the similarity between the two RAW images is high, and the two images can be divided into the same group, so that the image set division operation is realized and at least one image set, each including at least two RAW images, can be obtained.
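A sketch of this sequential grouping step, assuming per-image feature vectors have already been extracted (for example by a CNN); cosine similarity and the 0.85 threshold are assumed choices for illustration.

```python
import numpy as np

def group_by_similarity(features: list, threshold: float = 0.85) -> list:
    """Group sequentially shot images: cut to a new group when similarity drops below the threshold."""
    if not features:
        return []
    groups, current = [], [0]
    for i in range(1, len(features)):
        a, b = np.asarray(features[i - 1]), np.asarray(features[i])
        cos_sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        if cos_sim >= threshold:
            current.append(i)        # similar enough: same image set
        else:
            groups.append(current)   # similarity too low: start the next image set
            current = [i]
    groups.append(current)
    return groups                    # each group is a list of image indices
```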
Step S402: and determining a reference image corresponding to the image set.
After the image set is acquired, a reference map corresponding to the image set may be determined, where the reference map is used to perform a consistency adjustment operation on target images corresponding to other RAW maps in the image set. Specifically, determining the reference map corresponding to the image set may include: acquiring a target map corresponding to any one RAW map (for example, a first RAW map, a middle RAW map or a last RAW map, etc.) in an image set, wherein the image set comprises at least one RAW map; the target map is determined as a reference map corresponding to the image set.
Specifically, one image set may include a plurality of RAW maps that need to be processed, and in order to enable image processing quality and effect corresponding to one image set to be consistent, after the image set is acquired, a target map corresponding to any one RAW map in the image set may be determined, where the target map is obtained by performing a conversion operation on the RAW maps, and then the target map may be determined as a reference map corresponding to the image set.
For example, the image set includes RAW map 1, RAW map 2, and RAW map 3. After the RAW maps in the image set are processed, a target map 1 corresponding to RAW map 1, a target map 2 corresponding to RAW map 2, and a target map 3 corresponding to RAW map 3 may be obtained, that is, the image set corresponds to 3 target maps, and any one of the 3 target maps may be determined as the reference map, for example: target map 2 can be determined as the reference map, thereby effectively ensuring the accuracy and reliability of determining the reference map.
In other examples, determining the reference map corresponding to the image set may include: acquiring a target image corresponding to a first RAW image in an image set, wherein the image set comprises at least one RAW image; the target map is determined as a reference map corresponding to the image set.
For example, the image set may include a RAW map 1, a RAW map 2, and a RAW map 3, and after processing the RAW maps in the image set, a target map 1 corresponding to the RAW map 1, a target map 2 corresponding to the RAW map 2, and a target map 3 corresponding to the RAW map 3 may be obtained, that is, 3 target maps may correspond to the image set, and then the target map 1 corresponding to the first RAW map 1 may be determined as a reference map, thereby effectively ensuring the accuracy and reliability of determining the reference map.
Step S403: and adjusting the target image based on the reference image to obtain a processed image, wherein the parameters corresponding to the processed image are the same as the parameters corresponding to the reference image.
After the reference map is acquired, the target image may be adjusted based on the reference map, and the adjustment operation on the target image is a fine adjustment operation on the target image, so that a processed image may be obtained. Specifically, the embodiment does not limit a specific implementation manner of adjusting the target image based on the reference map, and in some examples, a machine learning model for implementing an image processing operation is configured in advance, and after the reference map and the target image are acquired, the reference map and the target map may be input into the machine learning model, so that a processed image adjusted based on the reference map may be acquired.
In other examples, adjusting the target image based on the reference map, obtaining the processed image may include: acquiring a reference image parameter corresponding to the reference image and an image parameter corresponding to the target image, determining an adjusting parameter based on the reference image parameter and the image parameter, and adjusting the target image based on the adjusting parameter, so that the processed image can be acquired.
Specifically, after the reference image and the target image are obtained, the reference image and the target image may be analyzed and processed respectively, so that a reference image parameter corresponding to the reference image and an image parameter corresponding to the target image may be obtained, where the reference image parameter and the image parameter may include at least one of the following: the color parameter, the exposure parameter, the color temperature parameter, and the like, after the reference image parameter and the image parameter are acquired, a parameter deviation between the reference image parameter and the image parameter may be determined as an adjustment parameter, and the adjustment parameter may include at least one of the following: a color adjustment parameter, an exposure adjustment parameter, a color temperature adjustment parameter, and the like. It should be noted that the adjustment parameter may be a positive value, a negative value, or 0, and when the adjustment parameter is a positive value, it indicates that the parameter of the target image needs to be increased, when the adjustment parameter is a negative value, it indicates that the parameter of the target image needs to be decreased, and when the adjustment parameter is 0, it indicates that the parameter of the target image needs to be kept unchanged. After the adjustment parameters are acquired, the target image may be adjusted based on the adjustment parameters, so that a processed image may be obtained.
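An illustrative sketch of the parameter-deviation approach: image parameters (here simply the per-channel mean values) are measured on the reference map and the target image, their difference serves as the adjustment parameter, and the target image is shifted accordingly; the chosen statistics are assumptions, not the patent's definition of image parameters.

```python
import numpy as np

def image_params(img: np.ndarray) -> np.ndarray:
    """Simple image parameters: the mean value of each colour channel, scaled to 0..1."""
    return img.reshape(-1, img.shape[-1]).mean(axis=0) / 255.0

def match_to_reference(target: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Fine-tune the target image so that its parameters match those of the reference map."""
    # Positive delta: raise the channel; negative: lower it; zero: keep it unchanged.
    delta = image_params(reference) - image_params(target)
    adjusted = target.astype(np.float64) / 255.0 + delta
    return (np.clip(adjusted, 0.0, 1.0) * 255.0 + 0.5).astype(np.uint8)
```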
In the embodiment, the image set corresponding to the RAW image to be processed is obtained, the reference image corresponding to the image set is determined, and then the target image is adjusted based on the reference image to obtain the processed image, so that the consistent image processing quality and effect of a plurality of images in the same image set can be effectively realized, and the quality and effect of image processing are further improved.
In a specific application, referring to fig. 5, in order to better support retouchers, improve their working efficiency, and let them devote more energy to the creative and professional work of personalized art design on the converted results, the embodiments of the application combine several deep learning techniques and provide an automatic RAW conversion method for the RAW conversion step in professional photography retouching.
The execution subject of the automatic RAW conversion method may be an automatic RAW conversion apparatus, and the apparatus may include: a thumbnail extraction module, a RAW image analysis module, a scene classification module, an image grouping module, a color matching module, and a consistency post-processing module. The image grouping module, scene classification module, color matching module, and consistency post-processing module all use deep learning, and these modules can use different underlying backbones, for example: ResNet-50, ResNet-100, or other deep networks, and various structures may be used within the networks, for example: network structures obtained by adjusting the number of convolution layers, the number of channels, and so on; the structure is not limited to a specific one. Specifically, the automatic RAW conversion method may include the following steps:
step 1: the thumbnail extraction module obtains an original RAW image to be processed, and extracts a thumbnail (JPG) of the original RAW image based on the thumbnail extraction module, where the thumbnail may be a thumbnail generated by an image acquisition device (a camera, a video camera, a device capable of realizing image acquisition operation, and the like) so as to facilitate a user to view or preview an original effect of the original RAW image.
The number of RAW images may be one or more. When there are multiple RAW images, they may be a plurality of RAW images to be processed uploaded by a user (in CR2, NEF, ARW, or other formats), and the output may be the target images after conversion, which may be 8-bit JPG images. For example, the original RAW image may be an image of 256 × 256 pixels, and the thumbnail corresponding to the original RAW image obtained by inputting it into the thumbnail extraction module may be an image of 128 × 128 pixels or 64 × 64 pixels.
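A sketch of the thumbnail extraction module, assuming the rawpy library is available; most RAW files carry an embedded camera-generated preview that can be pulled out directly, matching the thumbnail described above.

```python
import rawpy

def extract_thumbnail(raw_path: str, out_path: str) -> None:
    """Write the camera-embedded preview of a RAW file to disk as a JPG thumbnail."""
    with rawpy.imread(raw_path) as raw:
        thumb = raw.extract_thumb()
    if thumb.format == rawpy.ThumbFormat.JPEG:
        with open(out_path, "wb") as f:
            f.write(thumb.data)               # already JPEG-encoded bytes
    else:
        # Some cameras store a bitmap preview instead; re-encode it (assumes Pillow is available).
        from PIL import Image
        Image.fromarray(thumb.data).save(out_path, "JPEG")
```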
Step 2: when the RAW maps to be processed are multiple, the thumbnail images corresponding to the multiple RAW maps may be obtained, and then the image grouping module may perform grouping processing on the thumbnail images corresponding to the multiple RAW maps, so that multiple image sets may be obtained, where each image set may include multiple RAW maps.
Specifically, for the image grouping module, the input of the module may be a plurality of serialized thumbnails (in the order of shooting time), and the output may be a grouping result corresponding to the plurality of thumbnails, each image belongs to a unique image set, each image set may correspond to a unique reference image as a reference of a color exposure effect, and the reference image corresponding to each image set may be any one included in each image set.
When the image grouping module is used to group a plurality of thumbnails, the implementation principle may be as follows: a deep convolutional neural network (CNN) is used as a feature extractor, which can be trained with metric learning on a large batch of grouped data, and a unique depth feature is extracted for each RAW image to characterize the scene information to which it belongs; the feature similarity between adjacent images is then compared in sequence order, and when the similarity falls below a grouping threshold the images are cut into the next group, finally yielding a plurality of grouping results, that is, the image sets. In some examples, the grouping granularity can be adjusted, and different grouping results can be obtained for different granularities, for example: when the grouping granularity is a first granularity, the grouping result obtained based on the first granularity may include a set of distant-view images, a set of close-up images, and so on; when the grouping granularity is a second, finer granularity, the grouping result may include a first distant-view image set, a second distant-view image set, a first close-up image set, a second close-up image set, and so on, and the user can adjust or set the grouping granularity according to application or design requirements.
Step 3: after the thumbnail corresponding to the RAW image is obtained, the scene classification module may be used to perform scene classification on the thumbnail, so as to obtain the scene selection information corresponding to the RAW image.
The scene classification module is used to classify the RAW image by color temperature (cool-tone scene, warm-tone scene, other scenes, and so on) so as to better express the photographer's shooting intention. Because the adjustment approaches for images of different color families differ markedly, different color matching parameters can be adopted according to the color temperature classification in order to obtain finer and richer color effects. Specifically, the input of the scene classification module is a single thumbnail and the output is the unique scene information to which the thumbnail belongs. When the scene classification module is used to determine the scene information of a thumbnail, the implementation principle may be as follows: a deep convolutional neural network (CNN) is used as a feature extractor for the color temperature scene; the network can be obtained by training on pairs of input images and their labels, extracts a unique depth feature for each image, and then maps different depth features to different, unique color temperature classification labels.
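A minimal PyTorch sketch of such a scene classification module, assuming a ResNet-50 backbone whose final layer is replaced by a three-way color-temperature head (cool / warm / other); the backbone choice, class set, and omitted training loop are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_SCENES = 3  # 0: cool-tone scene, 1: warm-tone scene, 2: other scenes

class SceneClassifier(nn.Module):
    """CNN feature extractor plus a classification head mapping a thumbnail to a scene label."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)                      # underlying framework (backbone)
        backbone.fc = nn.Linear(backbone.fc.in_features, NUM_SCENES)  # colour-temperature head
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: a batch of thumbnails, shape (B, 3, H, W); returns per-class logits.
        return self.backbone(x)

# Inference: the index of the largest logit is the unique scene label of the thumbnail.
# scene_id = SceneClassifier()(thumb_tensor).argmax(dim=1)
```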
Step 4: after the original RAW image is acquired, the RAW-format image is converted into the sRGB space by the RAW image analysis module, so that an intermediate image can be obtained, where the intermediate image may be a 16-bit RGB image.
Specifically, after the original RAW image is acquired, the image set to which it belongs may be determined; the group reference map information corresponding to the original RAW image is then determined based on the image set, and the group reference map information and the scene selection information are sent to the RAW image analysis module, so that a color temperature adjustment operation can be performed on the intermediate image based on the scene selection information and a consistency adjustment operation can be performed based on the group reference map information.
In addition, when the RAW format image is converted into the sRGB space by the RAW image analysis module, a series of processing flows of: linear processing operation, white balance correction operation, demosaicing operation, color space conversion operation, brightness correction, gamma correction, and the like.
Step 5: after the intermediate image is acquired, a color and exposure adjustment operation can be performed on it by the color matching module using the scene selection information, so that an adjusted image can be obtained.
The color matching module is used to adjust the color and exposure of the 16-bit intermediate image parsed from the RAW image. For this module, the input may be the 16-bit intermediate image and the output is the 16-bit adjusted image after color and exposure processing. Specifically, the image processing principle of the color matching module may be as follows: the color and exposure adjustment is implemented with a 3D LUT and a deep learning model (CNN), where the deep learning model can be obtained by training on pairs of input images and manually adjusted output images, so that it can automatically color-grade an input image. In order to meet the adjustment requirements of different color temperature scenes, image data can be collected separately for each scene and a dedicated color matching model trained for it; the color temperature label obtained by classifying the input image is then used to route the image to the corresponding color matching model for adjustment, which effectively ensures the accuracy and reliability of determining the adjusted image.
It should be noted that different loss functions may be selected when training the deep learning model, for example the mean absolute error (MAE) or the mean squared error (MSE), as long as a deep learning model meeting the preset requirement is obtained.
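As a rough illustration of the 3D LUT component mentioned above, the following is a minimal numpy sketch that applies a 3D color LUT to an image with trilinear interpolation. The LUT here is random and its size (17×17×17) is an assumption; in the scheme described above, the LUT used for a given image would be produced or selected by the scene-specific color matching model rather than chosen arbitrarily.

```python
# Minimal sketch: apply a 3D LUT to an RGB image via trilinear interpolation.
import numpy as np

def apply_3d_lut(img, lut):
    """img: float RGB in [0, 1], shape (H, W, 3); lut: shape (N, N, N, 3)."""
    n = lut.shape[0]
    c = img * (n - 1)                        # continuous LUT-grid coordinates
    i0 = np.floor(c).astype(np.int64)
    i1 = np.minimum(i0 + 1, n - 1)
    f = c - i0                               # fractional part per channel

    out = np.zeros_like(img)
    # Blend the 8 corners of the enclosing LUT cell (trilinear interpolation).
    for ir, wr in ((i0[..., 0], 1 - f[..., 0]), (i1[..., 0], f[..., 0])):
        for ig, wg in ((i0[..., 1], 1 - f[..., 1]), (i1[..., 1], f[..., 1])):
            for ib, wb in ((i0[..., 2], 1 - f[..., 2]), (i1[..., 2], f[..., 2])):
                out += (wr * wg * wb)[..., None] * lut[ir, ig, ib]
    return out

intermediate = np.random.rand(256, 256, 3)   # 16-bit image rescaled to [0, 1]
lut = np.random.rand(17, 17, 17, 3)          # assumed 17x17x17 color LUT
adjusted = np.clip(apply_3d_lut(intermediate, lut), 0, 1)
```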
Step 6: fine adjustment is carried out on the adjusted image through the consistency post-processing module and the grouping reference image information, so that a target image can be obtained.
In an actual post-processing workflow, a manual RAW transfer operation may batch-adjust a group of images and then individually adjust any image that remains inconsistent, that is, migrate the color and exposure information of one image to another, so as to ensure that the group of images is consistent in color and exposure. To achieve the same result, the consistency post-processing module is configured to correct the case where a single automatically toned image is not uniform with the other images in its group. Its input may be the 16-bit toned adjusted image together with the unique grouping reference image of the image set corresponding to that adjusted image, and its output is the target image corresponding to the RAW image, where the target image may be 8-bit data.
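The consistency correction is not specified in detail here, so the sketch below uses a simple per-channel mean and standard-deviation transfer as a stand-in for migrating color and exposure from the grouping reference image to the adjusted image; the function name and the statistics-matching approach are assumptions for illustration only.

```python
# Illustrative stand-in for the consistency post-processing step:
# match each channel's mean/std to the grouping reference image.
import numpy as np

def match_reference(adjusted, reference, eps=1e-6):
    """Both inputs: float RGB arrays in [0, 1]; sizes need not match."""
    out = np.empty_like(adjusted)
    for ch in range(3):
        a, r = adjusted[..., ch], reference[..., ch]
        # Shift/scale the channel so its statistics match the reference's.
        out[..., ch] = (a - a.mean()) / (a.std() + eps) * r.std() + r.mean()
    return np.clip(out, 0, 1)

adjusted = np.random.rand(128, 128, 3)    # toned image from the color matching step
reference = np.random.rand(128, 128, 3)   # grouping reference image of the set
consistent = match_reference(adjusted, reference)
```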
The RAW image transfer method provided by the embodiment of the application can replace manual transfer, greatly improve the efficiency of later-stage image retouching, express the image information stored in the RAW image to the greatest extent, and obtain a highly competitive transfer result. In a specific implementation, the scheme couples several deep learning modules that automatically adjust the color and exposure of images, so that different adjustment references do not need to be set manually for different images. Specifically, the RAW image analysis module converts the RAW-format image into an image in the sRGB space. In order to meet customers' differing requirements on color temperature effects (warm tones, cold tones, and the like), and to address both the differing color and exposure adjustment needs of different scenes and the adjustment inconsistencies that may exist within a group of images during manual transfer, the image grouping module automatically groups the input batch of images, imitating the way images of the same shooting scene are grouped in a manual transfer process; images within a group that deviate individually can then be corrected so that all images in the same group keep color and exposure effects consistent with the group reference image, thereby ensuring a uniform effect across the group of pictures. Meanwhile, images can be distributed to different color matching models, so that the colors and exposures of different scenes are adjusted automatically and accurately; according to the required color effects, multiple color matching modules can be adaptively extended to perform the adjustments separately, finally realizing an automatic, scene-adaptive RAW transfer operation.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention; referring to fig. 6, the present embodiment provides an image processing apparatus for executing the image processing method shown in fig. 2, and specifically, the image processing apparatus may include:
a first obtaining module 11, configured to obtain a RAW map to be processed;
a first determining module 12, configured to determine scene information corresponding to a RAW map to be processed;
a first determining module 12, configured to determine, based on the scene information, an adjustment parameter corresponding to the RAW map to be processed;
and the first processing module 13 is configured to process the RAW image to be processed based on the adjustment parameter, so as to obtain a target image, where a format of the target image is different from a format of the RAW image to be processed.
In some examples, when the first determining module 12 determines the scene information corresponding to the RAW map to be processed, the first determining module 12 is configured to perform: acquiring a thumbnail corresponding to a RAW image to be processed; determining an image feature corresponding to the thumbnail; based on the image features, scene information is determined.
In some examples, when the first determining module 12 determines the scene information corresponding to the RAW map to be processed, the first determining module 12 is configured to perform: acquiring a thumbnail corresponding to a RAW image to be processed and a deep learning model, wherein the deep learning model is trained to be used for determining color temperature information of the image; and inputting the thumbnail into the deep learning model to obtain scene information corresponding to the RAW image to be processed.
In some examples, when the first processing module 13 processes the RAW map to be processed based on the adjustment parameter to obtain the target image, the first processing module 13 is configured to perform: converting the RAW image to be processed into a standard red, green and blue sRGB space to obtain an intermediate image; and adjusting the intermediate image based on the adjusting parameters to obtain the target image.
In some examples, the amount of data of the intermediate image is greater than or equal to the amount of data of the target image.
In some examples, after obtaining the target image, the first obtaining module 11, the first determining module 12 and the first processing module 13 in the present embodiment are configured to perform the following steps:
a first obtaining module 11, configured to obtain an image set corresponding to a RAW image to be processed;
a first determining module 12, configured to determine a reference map corresponding to the image set;
and the first processing module 13 is configured to adjust the target image based on the reference map to obtain a processed image, where parameters corresponding to the processed image are the same as parameters corresponding to the reference map.
In some examples, when the first determining module 12 determines the reference map corresponding to the image set, the first determining module 12 is configured to perform: acquiring a target image corresponding to any one RAW image in an image set, wherein the image set comprises at least one RAW image; the target map is determined as a reference map corresponding to the image set.
In some examples, before acquiring the set of images corresponding to the RAW map to be processed, the first acquisition module 11 and the first processing module 13 in this embodiment are configured to perform the following steps:
the first obtaining module 11 is configured to obtain a plurality of RAW maps, where a RAW map to be processed is one of the plurality of RAW maps;
the first processing module 13 is configured to group the multiple RAW maps by using a deep learning model to obtain at least one image set, where each image set includes at least two RAW maps.
In some examples, before acquiring the image set corresponding to the RAW map to be processed, the first acquiring module 11, the first determining module 12 and the first processing module 13 in the present embodiment are configured to perform the following steps:
the first obtaining module 11 is configured to obtain a plurality of RAW maps, where a RAW map to be processed is one of the plurality of RAW maps;
a first determining module 12, configured to determine an image feature corresponding to each of the plurality of RAW maps;
the first processing module 13 is configured to group the multiple RAW maps based on image features corresponding to the multiple RAW maps, so as to obtain at least one image set, where each image set includes at least two RAW maps.
In some examples, when the first processing module 13 groups the plurality of RAW maps based on the original image features corresponding to each of the plurality of RAW maps to obtain at least one image set, the first processing module 13 is configured to perform: determining the similarity between any two RAW images based on the image characteristics corresponding to the RAW images; and when the similarity is greater than or equal to a preset threshold, dividing two RAW graphs corresponding to the similarity into a group to obtain at least one image set.
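To make the grouping operation concrete, the following is an assumed sketch in which the cosine similarity between per-image features is thresholded and connected images are merged into one set with a small union-find; the threshold value, feature dimension, and helper names are illustrative, not taken from this application.

```python
# Assumed sketch of similarity-based grouping of RAW images by their features.
import numpy as np

def group_by_similarity(features, threshold=0.9):
    """features: (num_images, dim) array; returns a list of index groups."""
    norm = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    sim = norm @ norm.T                      # pairwise cosine similarity

    parent = list(range(len(features)))
    def find(i):                             # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            if sim[i, j] >= threshold:
                parent[find(i)] = find(j)    # merge the two images' sets

    groups = {}
    for i in range(len(features)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

image_sets = group_by_similarity(np.random.rand(6, 128))
```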
The apparatus shown in fig. 6 can perform the method of the embodiment shown in fig. 1-5, and the detailed description of this embodiment can refer to the related description of the embodiment shown in fig. 1-5. The implementation process and technical effect of the technical solution refer to the descriptions in the embodiments shown in fig. 1 to 5, and are not described herein again.
In one possible design, the structure of the image processing apparatus shown in fig. 6 may be implemented as an electronic device, which may be a mobile phone, a tablet computer, an electronic device, a server, or other devices. As shown in fig. 7, the electronic device may include: a first processor 21 and a first memory 22. Wherein the first memory 22 is used for storing a program for executing the image processing method in the embodiment shown in fig. 1-5, and the first processor 21 is configured for executing the program stored in the first memory 22.
The program comprises one or more computer instructions, wherein the one or more computer instructions, when executed by the first processor 21, are capable of performing the steps of:
acquiring a RAW image to be processed;
determining scene information corresponding to a RAW image to be processed;
determining an adjusting parameter corresponding to the RAW map to be processed based on the scene information;
and processing the RAW image to be processed based on the adjusting parameters to obtain a target image, wherein the format of the target image is different from that of the RAW image to be processed.
Further, the first processor 21 is also used to execute all or part of the steps in the embodiments shown in fig. 1-5.
The electronic device may further include a first communication interface 23 for communicating with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium for storing computer software instructions for an electronic device, which includes a program for executing the image processing method in the method embodiments shown in fig. 1 to 5.
Furthermore, an embodiment of the present invention provides a computer program product, including: computer program, which, when executed by a processor of an electronic device, causes the processor to carry out the steps of the image processing method as described above with reference to fig. 1-5.
FIG. 8 is a flowchart illustrating another image processing method according to an embodiment of the present invention. Referring to fig. 8, this embodiment provides another image processing method. The execution subject of the method may be an image processing apparatus, and the image processing apparatus may be implemented as software or as a combination of software and hardware; specifically, the image processing apparatus may be implemented as a wearable device (for example, the VR glasses described below), that is, the image processing method may be applied to a wearable device. Specifically, the image processing method may include the following steps:
step S801: and acquiring a RAW image to be processed, wherein the RAW image to be processed comprises scene information of the wearable device.
When the user uses the wearable device, the wearable device can display an interactive interface for interaction with the user, and the user performs a mode selection operation through the interactive interface, that is, controls the wearable device to operate in the first mode or the second mode.
When the wearable device operates in the second mode, the scene information where the wearable device is located may be obtained. The scene information corresponding to the wearable device may differ when the user uses it in different scenes, for example a daytime indoor scene, a daytime outdoor scene, a night indoor scene or a night outdoor scene, and images displayed on wearable devices in different scenes may have correspondingly different display effects. Therefore, in order to guarantee the quality and efficiency of image display while keeping the display effect consistent across different scenes, the scene information corresponding to the wearable device can be acquired, and the image to be displayed on the wearable device can be fine-tuned based on that scene information, thereby ensuring the quality and efficiency of image display.
In addition, this embodiment does not limit the specific way in which the scene information corresponding to the wearable device is obtained. Specifically, the wearable device may be provided with an image acquisition device and environment sensors; the scene information corresponding to the wearable device may be obtained through the image acquisition device, and the environment sensors may include at least one of the following: a temperature sensor, a humidity sensor, a light sensor, and the like. The environment sensors can acquire environment information corresponding to the wearable device, and the environment information may include at least one of the following: temperature information, humidity information, light information, and the like.
Step S803: and determining an adjusting parameter corresponding to the RAW map to be processed based on the scene information.
Because RAW images generated in different scenes may correspond to different adjustment parameters, and different environment information of the wearable device may also correspond to different adjustment parameters, a plurality of adjustment parameters for processing the RAW image to be processed under different scene information and different environment information can be pre-configured before the image processing operation is performed on the RAW image to be processed. The adjustment parameters may include at least one of the following: a color parameter, an exposure parameter, a contrast parameter, and the like.
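As a purely hypothetical illustration of such pre-configured parameters, the lookup below maps an assumed (scene, light level) key to color, exposure and contrast values; none of the keys or values are taken from this application.

```python
# Hypothetical table of pre-configured adjustment parameters keyed by scene
# and environment information; all entries are made up for illustration.
ADJUSTMENT_TABLE = {
    ("daytime_outdoor", "bright"): {"color_temp_shift": -200, "exposure_ev": -0.3, "contrast": 1.1},
    ("daytime_indoor",  "normal"): {"color_temp_shift":    0, "exposure_ev":  0.0, "contrast": 1.0},
    ("night_outdoor",   "dark"):   {"color_temp_shift":  150, "exposure_ev":  0.7, "contrast": 1.2},
}

def lookup_adjustment(scene, light_level):
    # Fall back to neutral parameters when no entry matches.
    return ADJUSTMENT_TABLE.get(
        (scene, light_level),
        {"color_temp_shift": 0, "exposure_ev": 0.0, "contrast": 1.0},
    )

params = lookup_adjustment("night_outdoor", "dark")
```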
The implementation manner and the implementation effect of determining the adjustment parameter corresponding to the RAW image to be processed based on the scene information in this embodiment are similar to the specific implementation manner and the implementation effect of step S203 in the foregoing embodiment, and the foregoing statements may be specifically referred to, and are not described herein again.
Step S804: and processing the RAW image to be processed based on the adjusting parameters to obtain a target image, wherein the format of the target image is different from that of the RAW image to be processed.
The implementation manner and the implementation effect of the target image obtained by processing the RAW image to be processed based on the adjustment parameter in this embodiment are similar to the specific implementation manner and the implementation effect of step S204 in the above embodiment, and the above statements may be specifically referred to, and are not described again here.
Step S805: and displaying the target image through the wearable device.
After the target image is acquired, the target image can be displayed through the wearable device, so that the user can directly view the processed target image through the wearable device, which further improves the practicability of the method.
It should be noted that the method in this embodiment may also include the method in the embodiment shown in fig. 1 to 5, and for the part not described in detail in this embodiment, reference may be made to the relevant description of the embodiment shown in fig. 1 to 5. The implementation process and technical effect of the technical solution refer to the descriptions in the embodiments shown in fig. 1 to 5, and are not described herein again.
For example, when the user plays a game using VR glasses and the operation mode of the VR glasses is the second mode, the camera on the VR glasses may be used to obtain the scene information where the VR glasses are located, and the scene information may include: an indoor scene, a street scene, a plaza scene, and the like. After the VR glasses acquire a RAW image to be processed that includes the scene where the VR glasses are located, the adjustment parameters corresponding to the RAW image may be determined based on the corresponding scene information; after the adjustment parameters are acquired, the RAW image to be processed may be converted into a target image based on the adjustment parameters, where the target image may be a JPG image, and a game scene corresponding to the real scene where the VR glasses are located may then be displayed through the VR glasses. For example, when a user wearing the VR glasses is in a room, through the above image processing operation a game room scene corresponding to the real room scene can be displayed in the VR glasses, and the user can perform interactive operations in that game room scene; this improves the experience of playing games through the VR glasses and further improves the practicability of the method.
In the image processing method provided by this embodiment, the RAW image to be processed is obtained, the adjustment parameter corresponding to the RAW image is determined based on the scene information of the wearable device included in the RAW image, and the RAW image is processed based on the adjustment parameter to obtain the target image; after the target image is obtained, it can be displayed through the wearable device. In this way, the RAW image can be automatically converted into a target image in a preset format without manual operation, which improves the quality and efficiency of image processing and helps improve the delivery efficiency of later-stage image retouching. In addition, because different scene information and different environment information can correspond to different adjustment parameters, RAW images in different scenes can be processed based on different adjustment parameters, which not only effectively guarantees the quality and efficiency of target image generation, but also meets the actual requirements of different users and improves the extensibility and controllability of the method, thereby further improving its practicability.
Fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention; referring to fig. 9, the present embodiment provides an image processing apparatus for executing the image processing method shown in fig. 8, and specifically, the image processing apparatus may include:
the second obtaining module 31 is configured to obtain a RAW image to be processed, where the RAW image to be processed includes scene information where the wearable device is located;
a second determining module 32, configured to determine, based on the scene information, an adjustment parameter corresponding to the RAW map to be processed;
the second processing module 33 is configured to process the RAW image to be processed based on the adjustment parameter, so as to obtain a target image, where a format of the target image is different from a format of the RAW image to be processed;
and a second display module 34, configured to display the target image through the wearable device.
The apparatus shown in fig. 9 can execute the method of the embodiment shown in fig. 8, and reference may be made to the related description of the embodiment shown in fig. 8 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 8, and are not described herein again.
In one possible design, the structure of the image processing apparatus shown in fig. 9 may be implemented as an electronic device, which may be a mobile phone, a tablet computer, an electronic device, a server, or other devices. As shown in fig. 10, the electronic device may include: a second processor 41 and a second memory 42. Wherein the second memory 42 is used for storing a program for executing the image processing method in the embodiment shown in fig. 8, and the second processor 41 is configured for executing the program stored in the second memory 42.
The program comprises one or more computer instructions, wherein the one or more computer instructions, when executed by the second processor 41, are capable of performing the steps of:
acquiring a RAW image to be processed, wherein the RAW image to be processed comprises scene information of wearable equipment;
determining an adjusting parameter corresponding to the RAW map to be processed based on the scene information;
processing the RAW image to be processed based on the adjusting parameters to obtain a target image, wherein the format of the target image is different from that of the RAW image to be processed;
and displaying the target image through the wearable device.
Further, the second processor 41 is also used to execute all or part of the steps in the embodiment shown in fig. 8.
The electronic device may further include a second communication interface 43 for communicating with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium for storing computer software instructions for an electronic device, which includes a program for executing the image processing method in the method embodiment shown in fig. 8.
Furthermore, an embodiment of the present invention provides a computer program product, including: a computer program which, when executed by a processor of an electronic device, causes the processor to perform the steps of the image processing method shown in fig. 8 described above.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by adding a necessary general hardware platform, and of course can also be implemented by a combination of hardware and software. Based on this understanding, the parts of the above technical solutions that in essence contribute to the prior art may be embodied in the form of a computer program product, which may be carried on one or more computer-usable storage media (including, without limitation, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include a volatile memory in a computer-readable medium, such as random access memory (RAM), and/or a non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (14)

1. An image processing method, comprising:
acquiring a RAW image to be processed;
determining scene information corresponding to the RAW map to be processed;
determining an adjusting parameter corresponding to the RAW map to be processed based on the scene information;
and processing the RAW image to be processed based on the adjusting parameter to obtain a target image, wherein the format of the target image is different from that of the RAW image to be processed.
2. The method of claim 1, wherein determining scene information corresponding to the RAW map to be processed comprises:
acquiring a thumbnail corresponding to the RAW image to be processed;
determining an image feature corresponding to the thumbnail;
determining the scene information based on the image feature.
3. The method of claim 1, wherein determining scene information corresponding to the RAW map to be processed comprises:
acquiring a thumbnail corresponding to the RAW image to be processed and a deep learning model, wherein the deep learning model is trained to be used for determining color temperature information of an image;
and inputting the thumbnail into the deep learning model to obtain scene information corresponding to the RAW image to be processed.
4. The method according to claim 1, wherein processing the RAW map to be processed based on the adjustment parameter to obtain a target image comprises:
converting the RAW image to be processed into a standard red, green and blue sRGB space to obtain an intermediate image;
and adjusting the intermediate image based on the adjusting parameter to obtain the target image.
5. The method of claim 4, wherein the amount of data of the intermediate image is greater than or equal to the amount of data of the target image.
6. The method of claim 1, wherein after obtaining the target image, the method further comprises:
acquiring an image set corresponding to the RAW image to be processed;
determining a reference image corresponding to the image set;
and adjusting the target image based on the reference image to obtain a processed image, wherein the parameters corresponding to the processed image are the same as the parameters corresponding to the reference image.
7. The method of claim 6, wherein determining the reference map corresponding to the image set comprises:
acquiring a target image corresponding to any RAW image in the image set, wherein the image set comprises at least one RAW image;
determining the target map as a reference map corresponding to the image set.
8. The method of claim 6, wherein prior to acquiring the set of images corresponding to the RAW map to be processed, the method further comprises:
acquiring a plurality of RAW images, wherein the RAW image to be processed is one of the plurality of RAW images;
grouping the plurality of RAW maps by using a deep learning model to obtain at least one image set, wherein each image set comprises at least two RAW maps.
9. The method of claim 6, wherein prior to acquiring the set of images corresponding to the RAW map to be processed, the method further comprises:
acquiring a plurality of RAW images, wherein the RAW image to be processed is one of the plurality of RAW images;
determining image characteristics corresponding to a plurality of RAW images;
the method comprises the steps of grouping a plurality of RAW maps based on image features corresponding to the RAW maps respectively to obtain at least one image set, wherein each image set comprises at least two RAW maps.
10. The method of claim 9, wherein grouping the plurality of RAW maps based on original image features corresponding to the plurality of RAW maps to obtain at least one image set comprises:
determining the similarity between any two RAW images based on the image characteristics corresponding to the RAW images;
and when the similarity is greater than or equal to a preset threshold, dividing two RAW graphs corresponding to the similarity into a group to obtain at least one image set.
11. An electronic device, comprising: a memory, a processor; wherein the memory is to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the image processing method of any of claims 1-10.
12. A computer storage medium storing a computer program which causes a computer to implement the image processing method according to any one of claims 1 to 10 when executed.
13. An image processing method, comprising:
acquiring a RAW image to be processed, wherein the RAW image to be processed comprises scene information of wearable equipment;
determining an adjusting parameter corresponding to the RAW map to be processed based on the scene information;
processing the RAW image to be processed based on the adjusting parameter to obtain a target image, wherein the format of the target image is different from that of the RAW image to be processed;
displaying the target image through the wearable device.
14. An electronic device, comprising: a memory, a processor; wherein the memory is to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the image processing method of claim 13.
CN202111580672.0A 2021-12-22 2021-12-22 Image processing method, apparatus and computer storage medium Pending CN114266694A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111580672.0A CN114266694A (en) 2021-12-22 2021-12-22 Image processing method, apparatus and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111580672.0A CN114266694A (en) 2021-12-22 2021-12-22 Image processing method, apparatus and computer storage medium

Publications (1)

Publication Number Publication Date
CN114266694A true CN114266694A (en) 2022-04-01

Family

ID=80828788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111580672.0A Pending CN114266694A (en) 2021-12-22 2021-12-22 Image processing method, apparatus and computer storage medium

Country Status (1)

Country Link
CN (1) CN114266694A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116347217A (en) * 2022-12-26 2023-06-27 荣耀终端有限公司 Image processing method, device and storage medium

Similar Documents

Publication Publication Date Title
US20210350136A1 (en) Method, apparatus, device, and storage medium for determining implantation location of recommendation information
KR100658998B1 (en) Image processing apparatus, image processing method and computer readable medium which records program thereof
CN107820020A (en) Method of adjustment, device, storage medium and the mobile terminal of acquisition parameters
CN109862389B (en) Video processing method, device, server and storage medium
CN111710049B (en) Method and device for determining ambient illumination in AR scene
CN115242992B (en) Video processing method, device, electronic equipment and storage medium
CN101527860A (en) White balance control apparatus, control method therefor, and image sensing apparatus
US9692963B2 (en) Method and electronic apparatus for sharing photographing setting values, and sharing system
CN101621628A (en) Photographic apparatus, setting method of photography conditions, and recording medium
WO2015167975A1 (en) Rating photos for tasks based on content and adjacent signals
CN103763475A (en) Photographing method and device
CN103440674A (en) Method for rapidly generating crayon special effect of digital image
CN111353965B (en) Image restoration method, device, terminal and storage medium
CN115082328A (en) Method and apparatus for image correction
CN106165409A (en) Image processing apparatus, camera head, image processing method and program
CN107424117A (en) Image U.S. face method, apparatus, computer-readable recording medium and computer equipment
CN114266694A (en) Image processing method, apparatus and computer storage medium
JP5149858B2 (en) Color image representative color determination device and operation control method thereof
CN113724175A (en) Image processing method and device based on artificial intelligence and electronic equipment
CN114257730A (en) Image data processing method and device, storage medium and computer equipment
CN108132935B (en) Image classification method and image display method
JP5381498B2 (en) Image processing apparatus, image processing program, and image processing method
CN111800568A (en) Light supplement method and device
CN111986309B (en) System and method for generating special film Pre-vis based on three-dimensional scanning
CN111435986B (en) Method for acquiring source image database, training device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination