CN113194245A - Image processing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN113194245A
Authority
CN
China
Prior art keywords
image
target
target area
image data
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110320688.1A
Other languages
Chinese (zh)
Inventor
邵寒月
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Wingtech Electronic Technology Co Ltd
Original Assignee
Shanghai Wingtech Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Wingtech Electronic Technology Co Ltd filed Critical Shanghai Wingtech Electronic Technology Co Ltd
Priority to CN202110320688.1A
Publication of CN113194245A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Abstract

The present disclosure relates to an image processing method, apparatus, device, and storage medium. The method comprises the following steps: acquiring an image to be processed, and identifying a target area in the image to be processed based on a preset identification strategy; processing the image data of each pixel in the target area to obtain target image data after processing of the target area; and generating a target processing image according to the target image data after processing of the target area and the image data in the non-target area. With this scheme, the image to be processed does not need to be transferred to a computer and processed by image processing software, so the image processing flow is simplified, personalized processing of the image to be processed is achieved, and the image processing effect is improved.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an image processing device, and a storage medium.
Background
In recent years, cameras and image processing technologies have developed rapidly, and the camera module has become an important module in smart devices. The built-in camera application and an increasing number of third-party applications on smart devices integrate various image processing algorithms to process captured pictures.
At present, when a smart device processes a captured picture, it provides a series of themes for the picture to be processed; different themes correspond to different image styles, and different presentation effects are created by applying these styles to the picture. However, such adjustment can only be applied to the picture as a whole, so the processing method is not flexible enough.
Disclosure of Invention
In order to solve the above technical problem, or at least partially solve it, the present disclosure provides an image processing method, an image processing apparatus, an image processing device, and a storage medium, so as to implement personalized processing of a partial image region of a picture and improve the image processing effect.
The present disclosure provides an image processing method, including:
acquiring an image to be processed, and identifying a target area in the image to be processed based on a preset identification strategy;
processing the image data of each pixel in the target area to obtain target image data after processing of the target area;
and generating a target processing image according to the target image data after processing of the target area and the image data in the non-target area.
The present disclosure provides an image processing apparatus, including:
the image acquisition module is used for acquiring an image to be processed;
the target area identification module is used for identifying a target area in the image to be processed based on a preset identification strategy;
the image processing module is used for processing the image data of each pixel in the target area to obtain the target image data after processing of the target area;
and the target processing image generation module is used for generating a target processing image according to the target image data after processing of the target area and the image data in the non-target area.
An embodiment of the present invention further provides an image processing apparatus, including:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method provided by any of the embodiments of the present invention.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the image processing method provided in any embodiment of the present invention.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
At least one target area is identified in the acquired image to be processed, image processing is performed only on the pixels in the target area, and a target processing image is generated based on the target image data after processing of the target area and the image data in the non-target area. This simplifies the image processing flow, realizes personalized processing of the image to be processed, and improves the image processing effect.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; obviously, those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a basic architecture diagram of an image processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of an image processing method according to a first embodiment of the present invention;
FIG. 3 is a flowchart of an image processing method according to a second embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an image processing apparatus according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image processing apparatus in a fourth embodiment of the present invention.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
In the related art, an image to be processed is generally processed as a whole. For example, when inversion processing is applied, the smart device inverts the picture taken by its built-in camera or a third-party application according to a provided image style; alternatively, the captured picture is transferred to a computer and inverted by image processing software on the computer (such as Photoshop). In both cases the inversion is applied to the entire picture, so the picture cannot be processed flexibly. Consequently, the image processing method is inflexible and the processing procedure is cumbersome.
In order to solve the above problems, embodiments of the present disclosure provide an image processing method, apparatus, device, and storage medium. Fig. 1 shows a basic architecture diagram of the image processing method. As shown in fig. 1, the architecture includes an image input module 11, a target area identification module 12, an image processing module 13, and an image output module 14. The image input module 11 is used for loading or inputting an image to be processed, classifying images to be processed according to their formats, and converting images of different formats into a uniform format; the image processing algorithm is integrated into a third-party application program and/or the Hardware Abstraction Layer (HAL) of the camera module so as to process the image to be processed in the uniform format. The target area identification module 12 is used for identifying the target area of the image to be processed, where the target area may include one or more areas. The image processing module 13 is used for processing the image data of each pixel in the target area based on the integrated image processing algorithm to obtain target image data; for example, inversion processing is performed on each pixel in the target area to obtain inverted target image data. The image output module 14 is used for outputting the processed image to obtain a target processing image.
This addresses the problems in the related art that the image processing mode lacks flexibility and the processing procedure is cumbersome. With the basic architecture of image processing shown in fig. 1, the image to be processed is converted to a uniform format by the image input module 11, the target area is determined by the target area identification module 12, each pixel in the target area is processed by the image processing module 13, and the processed image is output by the image output module 14. In this way the picture is processed flexibly, the image to be processed does not need to be imported into image processing software, and the image processing flow is simplified.
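To make the flow concrete, the following is a minimal sketch of the four-module pipeline in Python with NumPy. All function names, the central-rectangle placeholder strategy, and the brightening rule are illustrative assumptions for this sketch, not details taken from the disclosure; any of the identification strategies and adjustment rules described below can be substituted for the placeholders.

```python
import numpy as np

def image_input(image: np.ndarray) -> np.ndarray:
    """Image input module: sort inputs of different formats into a uniform
    format (here simply 8-bit, 3-channel RGB)."""
    if image.ndim == 2:                        # grayscale -> RGB
        image = np.stack([image] * 3, axis=-1)
    return image.astype(np.uint8)

def identify_target_region(image: np.ndarray) -> np.ndarray:
    """Target area identification module: return a boolean mask of the target
    area. A central rectangle stands in for the real identification strategies."""
    mask = np.zeros(image.shape[:2], dtype=bool)
    h, w = mask.shape
    mask[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = True
    return mask

def process_target_region(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Image processing module: apply an adjustment rule only inside the mask
    (here a simple brightening, standing in for the integrated algorithm)."""
    out = image.copy()
    out[mask] = np.clip(out[mask].astype(int) + 40, 0, 255).astype(np.uint8)
    return out

def image_output(image: np.ndarray) -> np.ndarray:
    """Image output module: return the target processing image."""
    return image

# input -> target area identification -> processing -> output
frame = image_input(np.random.randint(0, 256, (480, 640), dtype=np.uint8))
result = image_output(process_target_region(frame, identify_target_region(frame)))
```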
Next, an image processing method provided in the embodiment of the present disclosure will be described first.
Example one
The image processing method provided by this embodiment is applicable to scenarios in which a partial area of an image to be processed is to be personalized. The method may be performed by an image processing apparatus, which may be implemented in software and/or hardware and may be integrated into a device with data processing capability, such as a smart device or a server; the smart device may be a smart phone, a tablet, or the like. Referring to fig. 2, the method of the present embodiment specifically includes the following steps:
and S110, acquiring the image to be processed, and identifying a target area in the image to be processed based on a preset identification strategy.
The image to be processed refers to an image that requires personalized processing. It may be in any format, for example the jpg, gif, pcd, or raw format, and it may be a person image, a landscape image, a medical image, or the like.
In the embodiment of the present disclosure, identifying the target region in the image to be processed based on a preset identification strategy includes at least one of the following operations:
Method 1: carrying out scene recognition on the image to be processed, and determining a target area in the image to be processed based on a scene recognition result;
Method 2: performing depth of field recognition on the image to be processed, and determining a target area in the image to be processed based on a depth of field recognition result;
Method 3: acquiring an externally determined start position and end position, and determining a target area in the image to be processed based on the area enclosed by the start position and the end position;
Method 4: performing edge detection on the image to be processed, and determining a target area in the image to be processed based on an edge detection result;
Method 5: performing color recognition on the image to be processed, and determining a target area in the image to be processed based on a color recognition result.
With respect to method 1, the scene recognition result may be at least one target object, and a target object may be the background or a subject of the image to be processed. The background of the image to be processed may include an indoor scene, a snow scene, a beach scene, a forest scene, and the like; the subject of the image to be processed may include a person, an animal, an object, and the like. In an optional embodiment, the image to be processed is input into a trained image recognition model to recognize the categories of the target objects in the image to be processed and obtain at least one target object; the area where the at least one target object is located is taken as the target area, or the areas and categories of the target objects are displayed to the user and the target area is selected according to an externally input click operation. The image recognition model described in this embodiment may be trained on sample images and may include, but is not limited to, any one of a convolutional neural network, an artificial neural network, and a deep learning neural network.
For method 2, in the embodiment of the present disclosure, during shooting by the smart device, the depth of field parameters of the lens are set, a depth of field image is captured based on those parameters, the captured image is stored, depth of field identification is performed, and the target area is determined. The depth of field parameters may include the aperture size, the shooting distance, and the focal length of the lens. The larger the aperture, the shallower the depth of field (i.e., a blurred background); the smaller the aperture, the deeper the depth of field (i.e., a sharp background). The closer the subject is to the lens, the shallower the depth of field; the farther the subject is from the lens, the deeper the depth of field. The longer the focal length of the lens, the shallower the depth of field; the shorter the focal length, the deeper the depth of field. In an alternative embodiment, the method of depth of field identification for the image to be processed includes: acquiring the depth of field data recorded when the lens captured the depth of field image; performing contour detection on the depth of field image to obtain the contour region of at least one target object, and storing the coordinates of a reference point in each contour region; calculating the depth of field distance of the reference pixel point and of the other pixel points according to the coordinates of the reference point of the at least one target object and the depth of field data; screening out the other pixel points whose depth of field distance lies within a pre-stored depth of field error of the depth of field distance of the reference pixel point; and taking the region formed by the reference pixel point of the at least one target object and the other pixel points within the depth of field error range as the target region, or displaying this region to the user and selecting the target region according to an externally determined click operation.
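The depth-based screening step can be illustrated with a short sketch. It assumes a per-pixel depth map and a reference point are already available; the variable names and the 0.3 m error value are illustrative, not values from the disclosure.

```python
import numpy as np

def depth_based_region(depth_map: np.ndarray, ref_point, depth_error: float) -> np.ndarray:
    """Return a mask of pixels whose depth-of-field distance lies within
    the pre-stored depth_error of the reference pixel's distance."""
    ref_depth = depth_map[ref_point]               # depth distance of the reference pixel
    return np.abs(depth_map - ref_depth) <= depth_error

# Example: synthetic depth map in meters, reference point at row 240, column 320,
# and a pre-stored depth-of-field error of 0.3 m (all values made up).
depth = np.random.uniform(0.5, 5.0, size=(480, 640))
target_mask = depth_based_region(depth, (240, 320), 0.3)
```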
With respect to method 3, in an embodiment of the present disclosure, at least one pair of a start position and an end position is obtained. The area enclosed from the start position to the end position may have an arbitrary shape, such as a line, square, circle, ellipse, or other polygon. In an optional embodiment, determining the target region in the image to be processed based on the area enclosed by the start position and the end position includes: taking the pixel points corresponding to the start position and the end position as key pixel points; and identifying the contour trend of at least one target object in the image to be processed, and determining the target area corresponding to the at least one target object according to its contour trend and the key pixel points. For example, if the target object is spherical, its contour trend is arc-shaped, and the corresponding target area is determined based on the arc-shaped contour trend and the key pixel points; if the target object is cuboid, its contour trend is linear, and the corresponding target area is determined based on the linear contour trend and the key pixel points. If the target object has an irregular shape, multiple pairs of start and end positions can be obtained, partial contour shapes are determined from at least two such pairs, and all the determined partial contours are spliced to obtain the area corresponding to the at least one target object; this area is either taken as the target area directly, or displayed to the user so that the target area can be selected according to an externally determined click operation.
For method 4, in the embodiment of the present disclosure, edge detection refers to identifying pixel points in the image to be processed whose color or brightness changes obviously. In a specific implementation, the gray value of each pixel point in the image to be processed is obtained, and edge detection is performed on the image to be processed based on a preset edge detection algorithm and the gray values to obtain the region corresponding to at least one target object; this region is either taken as the target region directly, or displayed to the user so that the target region can be selected according to an externally determined click operation. Optionally, the preset edge detection algorithm includes, but is not limited to, any one of a first-order edge detection operator, a second-order edge detection operator, and a differential edge detection algorithm.
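As an illustration of this route, the sketch below uses OpenCV (version 4 API assumed), with the Canny detector standing in for the preset edge detection algorithm, and simplifies the region step to filling the largest detected contour; these concrete choices are assumptions, not details from the disclosure.

```python
import cv2
import numpy as np

def edge_based_region(image_bgr: np.ndarray) -> np.ndarray:
    """Detect edges on the grayscale image and fill the largest closed
    contour as a candidate target region."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)   # gray value of each pixel
    edges = cv2.Canny(gray, 50, 150)                      # stand-in for the preset detector
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        cv2.drawContours(mask, [largest], -1, color=255, thickness=cv2.FILLED)
    return mask > 0
```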
For method 5, in the embodiment of the present disclosure, the color value of each pixel point of the image to be processed is obtained, where the color value refers to the R (red), G (green), and B (blue) components of the pixel point; at least one target object is determined based on these color values, and the region corresponding to the at least one target object is taken as the target region, or displayed to the user so that the target region can be selected according to an externally determined click operation.
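A corresponding sketch for the color-recognition route; the RGB range used to pick out the target object is an arbitrary assumption.

```python
import numpy as np

def color_based_region(image_rgb: np.ndarray,
                       lower=(0, 0, 120), upper=(90, 90, 255)) -> np.ndarray:
    """Return a mask of pixels whose R, G, B values all lie inside the given
    range (here a rough, arbitrarily chosen 'blue' range)."""
    lo = np.asarray(lower, dtype=np.uint8)
    hi = np.asarray(upper, dtype=np.uint8)
    return np.all((image_rgb >= lo) & (image_rgb <= hi), axis=-1)
```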
S120, processing the image data of each pixel in the target area to obtain target image data after processing of the target area.
In the embodiment of the present disclosure, processing the image data of each pixel in the target area to obtain the target image data after processing of the target area includes: acquiring the initial image data of each pixel in the target area and a preset image adjustment rule; and adjusting the initial image data based on the image adjustment rule to obtain the target image data after processing of the target area.
Optionally, the initial image data may include at least one of an initial color value, an initial transparency, an initial gray value, an initial saturation, an initial sharpness value, an initial dark portion value, and an initial bright portion value. Accordingly, the image adjustment rule may include at least one of a color value adjustment rule, a transparency adjustment rule, a gray value adjustment rule, a saturation adjustment rule, a sharpness adjustment rule, and a dark portion adjustment rule.
In the embodiment of the present disclosure, the step of adjusting the initial image data is explained by taking the initial image data as the initial color value and the image adjustment rule as the color value adjustment rule as an example. Adjusting the initial image data based on the image adjustment rule to obtain the target image data after processing of the target area includes at least one of the following operations:
adding a fixed value to the initial color value according to the color value adjustment rule, and taking the sum as the adjusted target color value;
obtaining a maximum color value according to the color value adjustment rule, subtracting the initial color value from the maximum color value, and taking the resulting difference as the target color value;
determining a color adjustment proportion corresponding to the initial color value according to the color value adjustment rule and the initial color value, multiplying the initial color value by the color adjustment proportion, and taking the product as the target color value.
The fixed value may be predetermined, or the color interval corresponding to the initial color value may be determined and the color adjustment value corresponding to that interval used as the fixed value. The maximum color value may be predetermined. The color adjustment proportion may be determined according to the color interval corresponding to the initial color value.
It should be noted that the above only takes the initial image data being an initial color value and the image adjustment rule being a color value adjustment rule as an example to explain the adjustment of the initial image data. The principle of adjusting other forms of initial image data based on their corresponding image adjustment rules is the same as in the above example; only the adjustment parameters need to be replaced adaptively. Furthermore, the initial image data may be adjusted based on two or more image adjustment rules, and the adjusted image data may be superimposed or weighted-averaged to obtain the target image data after processing of the target area. Different target areas may be processed using the same or different image adjustment rules.
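The three example operations, and the weighted combination of rules mentioned above, can be sketched as follows for a single 8-bit color channel. The fixed value, maximum color value, adjustment proportion, and weights are illustrative assumptions.

```python
import numpy as np

def add_fixed_value(channel: np.ndarray, fixed: int = 30) -> np.ndarray:
    """Operation 1: add a predetermined fixed value to the initial color value."""
    return np.clip(channel.astype(int) + fixed, 0, 255).astype(np.uint8)

def subtract_from_max(channel: np.ndarray, max_value: int = 255) -> np.ndarray:
    """Operation 2: target color value = maximum color value - initial color value."""
    return np.clip(max_value - channel.astype(int), 0, 255).astype(np.uint8)

def scale_by_proportion(channel: np.ndarray, ratio: float = 1.2) -> np.ndarray:
    """Operation 3: multiply the initial color value by a color adjustment proportion."""
    return np.clip(channel.astype(float) * ratio, 0, 255).astype(np.uint8)

def weighted_combination(channel: np.ndarray, w1: float = 0.5, w2: float = 0.5) -> np.ndarray:
    """Combine two adjustment rules by weighted averaging, as allowed above."""
    a = add_fixed_value(channel).astype(float)
    b = scale_by_proportion(channel).astype(float)
    return np.clip(w1 * a + w2 * b, 0, 255).astype(np.uint8)

# Example on one channel of the target area (values are arbitrary).
red_channel = np.array([[10, 120], [200, 250]], dtype=np.uint8)
adjusted = subtract_from_max(red_channel)   # e.g. 255 - 10 = 245
```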
S130, generating a target processing image according to the target image data after processing of the target area and the image data in the non-target area.
The non-target area refers to the area other than the target area. For example, if the image to be processed is a landscape image including a person, a blue sky, white clouds, and a building, and the area to which the blue sky belongs is the target area, then the areas to which the person, the white clouds, and the building belong are regarded as the non-target area.
In the embodiment of the present disclosure, after the target image data is determined, the target image data is used as final image data in the target region, and the image data in the non-target region is combined to form the target processing image.
According to the technical scheme of this embodiment, at least one target area is identified in the acquired image to be processed, image processing is performed only on the pixels in the target area, and the target processing image is generated based on the target image data after processing of the target area and the image data in the non-target area. With this method, the image to be processed does not need to be transferred to a computer and processed by image processing software, so the image processing flow is simplified, personalized processing of the image to be processed is realized, and the image processing effect is improved.
Example two
In this embodiment, on the basis of the first embodiment, the step in S120 of "adjusting the initial image data based on the image adjustment rule to obtain the target image data after processing of the target area" is refined. Optionally, this adjustment includes: performing inversion processing on the initial color value of each pixel in the target area based on the color value adjustment rule in the image adjustment rule, and taking the inverted color value obtained by the inversion processing of the target area as the target image data. Explanations of terms that are the same as or correspond to those in the above embodiment are omitted. Referring to fig. 3, the image processing method provided in this embodiment includes:
s210, acquiring the image to be processed, and identifying a target area in the image to be processed based on a preset identification strategy.
S220, acquiring initial image data of each pixel in the target area and a preset image adjustment rule.
S230, performing inversion processing on the initial color value of each pixel in the target area based on the color value adjustment rule in the image adjustment rule, and taking the inverted color value obtained by the inversion processing as the target image data.
In the embodiment of the present disclosure, the inversion processing refers to converting a color value of a pixel into a complementary color of the color value. Specifically, the initial color value of each pixel in the target area is rotated by 180 degrees on the color wheel, so as to obtain the inverse color value of each pixel.
The calculation formula of the inversion processing is: R' = 180 - R, G' = 180 - G, B' = 180 - B, where R, G, and B are the initial color values of a pixel and R', G', and B' are its inverted color values; that is, the inverted color values are the target image data of the pixels in the target area.
By performing inversion processing on the initial color values of the pixels in the target area, the color value of each pixel in the target area is replaced with its complementary color, thereby realizing personalized processing of the image to be processed. Compared with the unprocessed image data, the target image data produces a picture effect with a sense of division, contrast, and storytelling, which improves the image processing effect and helps improve the user's visual experience.
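A minimal sketch of this inversion step follows. The constant 180 is taken from the formula above; clipping negative results to 0 is an assumption, since the description does not specify how out-of-range values are handled.

```python
import numpy as np

def invert_target_region(image_rgb: np.ndarray, mask: np.ndarray,
                         constant: int = 180) -> np.ndarray:
    """Replace each channel value V of the pixels inside the mask with
    constant - V (R' = 180 - R, G' = 180 - G, B' = 180 - B as above)."""
    out = image_rgb.copy()
    inverted = constant - out[mask].astype(int)          # may go negative for V > constant
    out[mask] = np.clip(inverted, 0, 255).astype(np.uint8)
    return out
```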
In the embodiment of the present disclosure, before processing the image data of each pixel in the target region, the image style corresponding to the target region may also be determined, and the target adjustment rule corresponding to the image style is screened out from the image adjustment rules.
In the embodiment of the present disclosure, adjusting the initial image data based on the image adjustment rule to obtain the target image data after processing of the target area includes: adjusting the initial image data based on the target adjustment rule to obtain the target image data after processing of the target area.
The image style comprises multiple display styles of multiple target objects in the image. For example, if the target object is a scene, the image style includes, but is not limited to, a sunny style, a cloudy style, a sand-dust style, a sleet style, etc.; if the target object is a person, the image style includes, but is not limited to, a sweet style, a gorgeous style, a bright style, and the like. The image styles may also include a high definition style, a super definition style, a sketch style, an oil painting style, and the like.
Specifically, adjusting the initial image data based on the target adjustment rule to obtain the target image data after the target area is processed includes: determining target image data corresponding to the initial image data based on a target adjustment rule; and adjusting the initial image data based on the difference value between the initial image data and the target image data to obtain the target image data after the target area is processed.
For example, if the image to be processed was shot on a cloudy day and the determined image style is the sunny style, the target adjustment rule corresponding to the sunny style is screened out from the image adjustment rules. Based on this rule, the target color value of the sky and the target color value of the clouds in a sunny state are determined; the initial color value of the sky is then adjusted according to the difference between the target and initial sky color values, and the initial color value of the clouds is adjusted according to the difference between the target and initial cloud color values, so that in the image to be processed the sky becomes bluer and the white clouds become whiter, yielding the target image data after processing of the target area.
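This difference-based adjustment can be sketched as follows. The "sunny sky" target color, the example pixel values, and the strength parameter are illustrative assumptions.

```python
import numpy as np

def adjust_toward_style_target(initial: np.ndarray, style_target: np.ndarray,
                               strength: float = 1.0) -> np.ndarray:
    """Adjust the initial color values by a fraction of the difference
    between the style's target values and the initial values."""
    diff = style_target.astype(float) - initial.astype(float)
    return np.clip(initial.astype(float) + strength * diff, 0, 255).astype(np.uint8)

# Example: push a greyish, cloudy-sky region toward an assumed sunny-sky target color.
sky_pixels = np.full((1000, 3), (150, 160, 170), dtype=np.uint8)
sunny_target = np.full((1000, 3), (90, 160, 230), dtype=np.uint8)
adjusted_sky = adjust_toward_style_target(sky_pixels, sunny_target, strength=0.8)
```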
S240, covering the target area with the target image data, and splicing the target image data in the target area with the image data in the non-target area to generate the target processing image.
In the embodiment of the present disclosure, generating the target processing image according to the target image data after processing of the target area and the image data in the non-target area includes: covering the target area with the target image data, and splicing the target image data in the target area with the image data in the non-target area to generate the target processing image.
Specifically, the target image data and the initial image data of each pixel in the target region are acquired, the target image data is substituted for the initial image data so as to cover the target region, and the target image data in the target region is spliced with the image data in the non-target region based on the coordinate data or feature data of each pixel in the target and non-target regions to generate the target processing image.
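Reduced to code, the overlay-and-splice step is a masked replacement. A minimal sketch, assuming the processed target image data and the target-area mask come from the previous steps:

```python
import numpy as np

def compose_target_image(original: np.ndarray, target_data: np.ndarray,
                         mask: np.ndarray) -> np.ndarray:
    """Cover the target area with the processed target image data and splice
    it with the untouched image data of the non-target area."""
    result = original.copy()
    result[mask] = target_data[mask]      # target area: processed target image data
    return result                         # non-target area: initial image data kept
```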
It should be noted that in S210 to S240 above, the pixels in the target region of the image to be processed are handled in a unified manner. In a specific implementation, each target region of the image to be processed may be further divided into at least two sub-regions, and the image data of each sub-region may be processed separately.
In an embodiment of the present disclosure, a method for processing the image data of each pixel in the target area to obtain the target image data after processing of the target area includes: acquiring at least one key pixel point of the target area as a central pixel point; determining at least one sub-region based on the pixel points within a preset neighborhood range of the at least one central pixel point; determining the image adjustment rule of each sub-region according to the image data of the pixels in that sub-region and the image data of the pixels in its adjacent regions; and processing each sub-region of the target area based on its image adjustment rule to obtain the target image data corresponding to each sub-region.
The key pixel point may be a point on the boundary of the target region, a point on the center line of the target region, or a point at another position in the target region.
For example, suppose the image to be processed is a water-flow image, the target area is the area containing the water flow, and the image style corresponding to the target area is the high-definition style. From the viewpoint of human vision, the farther the water is from the observer or lens, the darker its color, and the closer it is, the lighter its color. Therefore, the target area can be divided into different sub-regions based on their distance from the observer or lens; the image adjustment rule of each sub-region is determined according to the image data of the pixels in that sub-region and in its adjacent regions, and each sub-region is processed based on its own image adjustment rule to obtain the corresponding target image data. The color values in the target area then change gradually from far to near, so that the target image data in the target area better matches reality and the observer's viewing experience is improved.
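The water-flow example can be sketched as a depth-banded adjustment: sub-regions farther from the lens are darkened more, giving the near-to-far gradation. The band boundaries and darkening factors are illustrative assumptions.

```python
import numpy as np

def graded_subregion_adjustment(image_rgb: np.ndarray, depth_map: np.ndarray,
                                target_mask: np.ndarray) -> np.ndarray:
    """Split the target area into sub-regions by distance and darken the
    farther sub-regions more, producing a near-to-far color gradation."""
    out = image_rgb.astype(float)
    # Illustrative bands: (min depth, max depth, brightness factor)
    bands = [(0.0, 2.0, 1.00), (2.0, 4.0, 0.85), (4.0, np.inf, 0.70)]
    for lo, hi, factor in bands:
        sub_region = target_mask & (depth_map >= lo) & (depth_map < hi)
        out[sub_region] *= factor
    return np.clip(out, 0, 255).astype(np.uint8)
```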
In this way, each sub-region of the target area is personalized, a gradual-change effect is presented within the target area, and target image data that better matches reality is obtained, which further improves the image processing effect and helps improve the user's viewing experience.
The following is an embodiment of an image processing apparatus according to an embodiment of the present invention, which belongs to the same inventive concept as the image processing methods of the above embodiments, and reference may be made to the above embodiments of the image processing method for details that are not described in detail in the embodiments of the image processing apparatus.
EXAMPLE III
The present embodiment provides an image processing apparatus, and referring to fig. 4, the apparatus specifically includes:
an image obtaining module 310, configured to obtain an image to be processed;
a target area identification module 320, configured to identify a target area in the image to be processed based on a preset identification policy;
the image processing module 330 is configured to process image data of each pixel in the target area to obtain target image data after processing of the target area;
and the target processing image generating module 340 is configured to generate a target processing image according to the target image data after the target area processing and the image data in the non-target area.
Through the image processing device of the third embodiment of the invention, personalized processing of partial image areas of the picture is realized, and the image processing effect is improved.
Optionally, the target area identifying module 320 is specifically configured to perform at least one of the following operations:
carrying out scene recognition on the image to be processed, and determining a target area in the image to be processed based on a scene recognition result;
performing depth of field recognition on the image to be processed, and determining a target area in the image to be processed based on a depth of field recognition result;
acquiring an externally determined starting position and an externally determined end position, and determining a target area in the image to be processed based on an area enclosed by the starting position and the end position;
performing edge detection on the image to be processed, and determining a target area in the image to be processed based on an edge detection result;
and performing color recognition on the image to be processed, and determining a target area in the image to be processed based on a color recognition result.
Optionally, the image processing module 330 is specifically configured to obtain initial image data of each pixel in the target region and a preset image adjustment rule;
and adjusting the initial image data based on the image adjustment rule to obtain the target image data after processing of the target area.
Optionally, the image processing module 330 is specifically configured to perform inversion processing on the initial color value of each pixel in the target area based on the color value adjustment rule in the image adjustment rule, and to take the inverted color value obtained by the inversion processing as the target image data.
Optionally, the apparatus further comprises: the system comprises an image style determining module and a target adjusting rule screening module; the image style determining module is used for determining the image style corresponding to the target area; and the target adjustment rule screening module is used for screening out a target adjustment rule corresponding to the image style from the image adjustment rules.
Optionally, the image processing module 330 is specifically configured to adjust the initial image data based on the target adjustment rule, so as to obtain the target image data after processing of the target area.
Optionally, the target processing image generating module 340 is specifically configured to cover the target region with the target image data, and to splice the target image data in the target region with the image data in the non-target region to generate the target processing image.
The image processing device provided by the embodiment of the invention can execute the image processing method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Referring to fig. 5, the present embodiment provides an image processing apparatus 400 including: one or more processors 420; the storage device 410 is used for storing one or more programs, and when the one or more programs are executed by the one or more processors 420, the one or more processors 420 implement the image processing method provided by the embodiment of the present invention, including:
acquiring an image to be processed, and identifying a target area in the image to be processed based on a preset identification strategy;
processing the image data of each pixel in the target area to obtain target image data after processing of the target area;
and generating a target processing image according to the target image data after processing of the target area and the image data in the non-target area.
Of course, those skilled in the art will understand that the processor 420 may also implement the technical solution of the image processing method provided by any embodiment of the present invention.
The image processing apparatus 400 shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 5, the image processing apparatus 400 includes a processor 420, a storage device 410, an input device 430, and an output device 440; the number of the processors 420 in the device may be one or more, and one processor 420 is taken as an example in fig. 5; the processor 420, the storage device 410, the input device 430 and the output device 440 of the apparatus may be connected by a bus or other means, for example, by a bus 550 in fig. 5.
The storage device 410, which is a computer-readable storage medium, may be used to store software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the image processing method in the embodiment of the present invention (for example, an image acquisition module, a target area identification module, an image processing module, and a target processing image generation module in the image processing device).
The storage device 410 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the storage 410 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the storage 410 may further include memory located remotely from the processor 420, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 430 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the apparatus, and may include at least one of a mouse, a keyboard, and a touch screen, for example. The output device 440 may include a display device such as a display screen.
EXAMPLE five
The present embodiments provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are operable to perform a method of image processing, the method comprising:
acquiring an image to be processed, and identifying a target area in the image to be processed based on a preset identification strategy;
processing the image data of each pixel in the target area to obtain target image data after processing of the target area;
and generating a target processing image according to the target image data after processing of the target area and the image data in the non-target area.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the operations of the method described above, and may also perform related operations in the image processing method provided by any embodiment of the present invention.
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention can be implemented by software plus necessary general-purpose hardware, and certainly also by hardware alone, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk, or an optical disc of a computer, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the image processing method provided by the embodiments of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring an image to be processed, and identifying a target area in the image to be processed based on a preset identification strategy;
processing the image data of each pixel in the target area to obtain target image data after processing of the target area;
and generating a target processing image according to the target image data after processing of the target area and the image data in the non-target area.
2. The method according to claim 1, wherein the identifying the target region in the image to be processed based on a preset identification strategy comprises at least one of the following operations:
carrying out scene recognition on the image to be processed, and determining a target area in the image to be processed based on a scene recognition result;
performing depth of field recognition on the image to be processed, and determining a target area in the image to be processed based on a depth of field recognition result;
acquiring an externally determined starting position and an externally determined end position, and determining a target area in the image to be processed based on an area enclosed by the starting position and the end position;
performing edge detection on the image to be processed, and determining a target area in the image to be processed based on an edge detection result;
and performing color recognition on the image to be processed, and determining a target area in the image to be processed based on a color recognition result.
3. The method according to claim 1, wherein the processing the image data of each pixel in the target region to obtain the target image data after the target region processing comprises:
acquiring initial image data of each pixel in the target area and a preset image adjustment rule;
and adjusting the initial image data based on the image adjustment rule to obtain the target image data after processing of the target area.
4. The method of claim 3, wherein the adjusting the initial image data based on the image adjustment rule to obtain the target image data after the target area processing comprises:
and performing inversion processing on the initial color value of each pixel in the target area based on the color value adjustment rule in the image adjustment rule, and taking the inverted color value obtained by the inversion processing of the target area as the target image data.
5. The method of claim 3, wherein prior to said processing image data for each pixel in the target region, the method further comprises:
and determining the image style corresponding to the target area, and screening out the target adjustment rule corresponding to the image style from the image adjustment rules.
6. The method of claim 5, wherein the adjusting the initial image data based on the image adjustment rule to obtain the target image data after the target area processing comprises:
and adjusting the initial image data based on the target adjustment rule to obtain the target image data after processing of the target area.
7. The method of claim 1, wherein generating a target processed image from the target image data after the target area processing and image data in a non-target area comprises:
and covering the target area with the target image data, and splicing the target image data in the target area with the image data in the non-target area to generate the target processing image.
8. An image processing apparatus characterized by comprising:
the image acquisition module is used for acquiring an image to be processed;
the target area identification module is used for identifying a target area in the image to be processed based on a preset identification strategy;
the image processing module is used for processing the image data of each pixel in the target area to obtain the target image data after processing of the target area;
and the target processing image generation module is used for generating a target processing image according to the target image data after processing of the target area and the image data in the non-target area.
9. An image processing apparatus, characterized in that the apparatus comprises:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the image processing method according to any one of claims 1 to 7.
CN202110320688.1A 2021-03-25 2021-03-25 Image processing method, device, equipment and storage medium Pending CN113194245A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110320688.1A CN113194245A (en) 2021-03-25 2021-03-25 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110320688.1A CN113194245A (en) 2021-03-25 2021-03-25 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113194245A true CN113194245A (en) 2021-07-30

Family

ID=76973820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110320688.1A Pending CN113194245A (en) 2021-03-25 2021-03-25 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113194245A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103593828A (en) * 2013-11-13 2014-02-19 厦门美图网科技有限公司 Image processing method capable of carrying out partial filter adding
US20170132459A1 (en) * 2015-11-11 2017-05-11 Adobe Systems Incorporated Enhancement of Skin, Including Faces, in Photographs
CN106851124A (en) * 2017-03-09 2017-06-13 广东欧珀移动通信有限公司 Image processing method, processing unit and electronic installation based on the depth of field
CN110830706A (en) * 2018-08-08 2020-02-21 Oppo广东移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN111028137A (en) * 2018-10-10 2020-04-17 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张善文: 《图像模式识别》 (Image Pattern Recognition), 1 June 2020 *

Similar Documents

Publication Publication Date Title
CN106778928B (en) Image processing method and device
JP7413400B2 (en) Skin quality measurement method, skin quality classification method, skin quality measurement device, electronic equipment and storage medium
CN108717524B (en) Gesture recognition system based on double-camera mobile phone and artificial intelligence system
CN104811684B (en) A kind of three-dimensional U.S. face method and device of image
JP2020530920A (en) Image lighting methods, devices, electronics and storage media
CN108109161B (en) Video data real-time processing method and device based on self-adaptive threshold segmentation
CN111066026B (en) Techniques for providing virtual light adjustment to image data
CN111882627A (en) Image processing method, video processing method, device, equipment and storage medium
CN108111911B (en) Video data real-time processing method and device based on self-adaptive tracking frame segmentation
CN109214996A (en) A kind of image processing method and device
CN113327316A (en) Image processing method, device, equipment and storage medium
CN113781370A (en) Image enhancement method and device and electronic equipment
CN105580050A (en) Providing control points in images
CN114820292A (en) Image synthesis method, device, equipment and storage medium
CN113052923B (en) Tone mapping method, tone mapping apparatus, electronic device, and storage medium
CN107133932A (en) Retina image preprocessing method and device and computing equipment
CN110689478B (en) Image stylization processing method and device, electronic equipment and readable medium
CN117061882A (en) Video image processing method, apparatus, device, storage medium, and program product
CN113724282A (en) Image processing method and related product
CN112435173A (en) Image processing and live broadcasting method, device, equipment and storage medium
CN111127367A (en) Method, device and system for processing face image
CN113194245A (en) Image processing method, device, equipment and storage medium
CN105991939A (en) Image processing method and device
Zhang et al. A compensation textures dehazing method for water alike area
EP1374169A2 (en) Application of visual effects to a region of interest within an image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210730