CN112839223B - Image compression method, image compression device, storage medium and electronic equipment - Google Patents

Image compression method, image compression device, storage medium and electronic equipment

Info

Publication number
CN112839223B
CN112839223B (application CN202011547685.3A)
Authority
CN
China
Prior art keywords
image
pixel
compression
foreground object
pixel area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011547685.3A
Other languages
Chinese (zh)
Other versions
CN112839223A (en)
Inventor
孙永刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Coolpad Technology Co ltd
Original Assignee
Shenzhen Coolpad Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Coolpad Technology Co ltd filed Critical Shenzhen Coolpad Technology Co ltd
Priority to CN202011547685.3A priority Critical patent/CN112839223B/en
Publication of CN112839223A publication Critical patent/CN112839223A/en
Application granted granted Critical
Publication of CN112839223B publication Critical patent/CN112839223B/en
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/167Position within a video image, e.g. region of interest [ROI]

Abstract

The embodiments of the present application disclose an image compression method, an image compression device, a storage medium, and an electronic device. The method comprises: performing foreground object identification on an original image to determine a first pixel area and a second pixel area in the original image; and compressing the first pixel area with a first compression mode and the second pixel area with a second compression mode to generate a compressed image corresponding to the original image. With the embodiments of the present application, the image compression rate can be improved while preserving the image display effect.

Description

Image compression method, device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image compression method and apparatus, a storage medium, and an electronic device.
Background
With the development of communication technology, images have become the dominant media form on the mobile internet. Web pages browsed on mobile phones contain large numbers of images, and application scenarios such as photo capture and image sharing in instant messaging applications are part of users' daily lives, making rapid image sharing a necessity. At present, images that occupy a large amount of memory can be compressed to enable rapid sharing.
Disclosure of Invention
The embodiments of the present application provide an image compression method, an image compression device, a storage medium, and an electronic device, which can improve the image compression rate while preserving the image display effect. The technical solutions of the embodiments of the present application are as follows:
in a first aspect, an embodiment of the present application provides an image compression method, where the method includes:
performing foreground object identification on an original image, and determining a first pixel area and a second pixel area in the original image, wherein the first pixel area is the pixel area corresponding to a foreground object, and the second pixel area is the pixel area of the original image other than the first pixel area;
compressing the first pixel area by adopting a first compression mode and compressing the second pixel area by adopting a second compression mode to generate a compressed image corresponding to the original image, wherein the image compression rate of the second compression mode is lower than that of the first compression mode.
In a second aspect, an embodiment of the present application provides an image compression apparatus, including:
the object identification module is used for performing foreground object identification on an original image and determining a first pixel area and a second pixel area in the original image, wherein the first pixel area is the pixel area corresponding to a foreground object, and the second pixel area is the pixel area of the original image other than the first pixel area;
the image compression module is used for compressing the first pixel area by adopting a first compression mode and compressing the second pixel area by adopting a second compression mode to generate a compressed image corresponding to the original image, wherein the image compression rate of the second compression mode is lower than that of the first compression mode.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides an electronic device, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The technical solutions provided by some embodiments of the present application bring at least the following beneficial effects:
in one or more embodiments of the present application, the terminal performs foreground object identification on an original image and determines a first pixel area and a second pixel area in the original image, where the first pixel area is the pixel area corresponding to a foreground object and the second pixel area is the pixel area of the original image other than the first pixel area; the terminal then compresses the first pixel area with a first compression mode and the second pixel area with a second compression mode to generate a compressed image corresponding to the original image, where the image compression rate of the second compression mode is lower than that of the first compression mode. By identifying the foreground object of the original image, determining the first and second pixel areas, and compressing them with compression modes of different image compression rates, the low compression rate that results from compressing all areas of an image at one uniform rate is avoided; the image compression rate can be improved while preserving the image display effect, which benefits image application scenarios such as image transmission, storage, and sending. Moreover, compared with using the same compression rate for different areas, the image visual effect is better.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image compression method provided in an embodiment of the present application;
FIG. 2 is a schematic flowchart of another image compression method provided in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an image compression apparatus according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a region compression module according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a special effect determining unit provided in an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a region compressing unit according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an operating system and a user space provided in an embodiment of the present application;
FIG. 9 is an architectural diagram of the Android operating system of FIG. 7;
FIG. 10 is an architectural diagram of the iOS operating system of FIG. 7.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. It should also be noted that, unless otherwise explicitly specified and limited, the words "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the steps or elements listed, but may include other steps or elements not listed or inherent to such process, method, article, or apparatus. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art on a case-by-case basis. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "And/or" describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The present application will be described in detail with reference to specific examples.
In one embodiment, as shown in fig. 1, an image compression method is proposed, which can be implemented by a computer program and run on an image compression apparatus based on the von Neumann architecture. The computer program may be integrated into an application or run as an independent tool application.
The image compression apparatus may be a terminal, and the terminal may be an electronic device with an image compression function, including but not limited to: wearable devices, handheld devices, personal computers, tablet computers, in-vehicle devices, smartphones, and computing devices or other processing devices connected to a wireless modem. Terminal devices in different networks may be called by different names, for example: user equipment, access terminal, subscriber unit, subscriber station, mobile station, remote terminal, mobile device, user terminal, wireless communication device, user agent, cellular telephone, cordless telephone, Personal Digital Assistant (PDA), or terminal equipment in a 5G network or a future evolved network.
Specifically, the image compression method includes:
step S101: carrying out foreground object identification on an original image, and determining a first pixel area and a second pixel area in the original image.
An image is a description or depiction of a natural thing or objective object (a person, animal, plant, landscape, etc.) and contains information about the object being described; usually an image is a picture with a visual effect. In this embodiment, the original image can be understood as the image to be processed (compressed). The original image may be a photograph, drawing, clip art, map, satellite cloud picture, movie frame, X-ray film, electroencephalogram, electrocardiogram, etc. In practical applications, the original image is usually used to generate a target image after image processing. Compared with the original image, the target image occupies less memory, which facilitates image transmission and allows the receiver of the image data to receive it quickly.
Furthermore, in an image application scenario, the memory occupied by an original image is generally large. In order to transmit the image to the image receiver more quickly, the original image is compressed: a compressed image is generated by compressing the original image at a certain compression rate, which facilitates storage, transmission, and processing.
In practical applications, compression is often performed with a fixed compression rate over the whole image, which ignores image semantics. To keep image distortion low, such uniform compression must use a low compression rate; yet the image data in an original image typically contains considerable redundancy, and many image application scenarios do not require the full semantics of the whole image to be preserved. In the present application, the redundancy of image data in an image application scenario mainly falls into the following cases: spatial redundancy caused by correlation between adjacent pixels in the non-foreground part of an image; temporal redundancy caused by correlation between different frames in the non-foreground part of an image sequence; and spectral redundancy caused by correlation between different color planes or spectral bands in the non-foreground part of an image. The purpose of data compression here is to further compress the non-foreground part at a high compression rate on top of the compression of the original image, reducing the number of bits required to represent the data by removing data redundancy. Since the amount of image data is enormous, storing, transmitting, and processing it is difficult, so efficient compression of image data is very important.
The first pixel area is the pixel area corresponding to the foreground object, and the second pixel area is the pixel area of the original image other than the first pixel area.
the foreground object can be understood as a natural object or an objective object (human, animal, plant, landscape, etc.) emphasized by the image semantics in the original image, and if the figure image is the figure, the foreground object can be understood as the figure emphasized by the image semantics in the original image; if the image of the animal is the foreground object, the foreground object can be understood as the animal with emphasis on the image semantics in the original image; if the landscape image is used, the foreground object can be understood as the landscape which is emphasized by the image semantics in the original image.
The second pixel area is a pixel area in the original image except the first pixel area, and can also be understood as a background object corresponding to a foreground object in the original image.
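As an illustrative sketch outside the claims, the division into the two pixel areas can be expressed with a binary foreground mask (assumed here to come from the foreground object identification step; the function name is hypothetical):

```python
import numpy as np

def split_pixel_areas(image: np.ndarray, mask: np.ndarray):
    """Split an H x W x C image into the first (foreground) and second
    (background) pixel areas using a binary mask (1 = foreground object)."""
    mask3 = mask[..., None].astype(bool)      # broadcast mask over channels
    first_area = np.where(mask3, image, 0)    # keep foreground pixels
    second_area = np.where(mask3, 0, image)   # keep background pixels
    return first_area, second_area

# Toy 2x2 RGB image with the left column marked as foreground.
img = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)
mask = np.array([[1, 0], [1, 0]], dtype=np.uint8)
fg, bg = split_pixel_areas(img, mask)
```

Because the two areas are disjoint, adding them back together reconstructs the original image exactly.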
In practical applications, during foreground object identification on the original image, the foreground object can be determined in advance. For example, if the foreground object is determined to be a person, this is used as a label of the original image and input into a foreground object identification model, which can then quickly output the first pixel area and the second pixel area of the original image, improving the identification speed of the model.
In a possible implementation, the terminal may receive a foreground object input by the user for the original image and perform foreground object identification on the original image accordingly.
optionally, the terminal may obtain a keyword manually input by the user about the foreground object, the terminal may match a corresponding foreground object template based on the keyword, and a first pixel region and a second pixel region corresponding to the foreground object may be determined from the original image by using the foreground object template, where the foreground object template includes an image feature of the foreground object, and the image pixel region where the foreground object is located may be matched from the original image based on the image feature.
Optionally, in the present application, the input foreground object is not a region delineated manually in the original image, but a medium representing the foreground object, such as a text medium or an image medium (for example, a keyword input by the user). The foreground object identification process is implemented based on a pre-trained foreground object identification model: the original image and the annotated object identification label are input into the model, which outputs the first pixel area and the second pixel area of the original image.
In a possible implementation manner, scene semantic information related to the original image is obtained, and semantic processing is performed based on the scene semantic information to determine a foreground object corresponding to the original image.
Scene semantic information is semantic description information of an image related to the image application scenario. In an instant chat scenario, for example, a user sharing a captured image usually provides a text or voice description of the image before sending it; this is the semantic information related to the image in the chat scenario. The terminal can acquire this scene semantic information (e.g., the text description) and perform semantic processing on it to obtain the foreground object corresponding to the original image, such as a keyword representing the foreground object. The semantic processing can be performed by a semantic information recognition model based on a neural network: the semantic description information is input into the model, which outputs the foreground object.
Optionally, the neural network model may be implemented based on one or a fitted combination of a Convolutional Neural Network (CNN) model, a Deep Neural Network (DNN) model, a Recurrent Neural Network (RNN) model, an embedding model, a Gradient Boosting Decision Tree (GBDT) model, a Logistic Regression (LR) model, and the like.
In practical applications, the foreground object of the original image can also be identified directly, without determining the foreground object in advance.
In a feasible implementation, automatic foreground object detection may be adopted. The terminal pre-establishes a foreground object library required for foreground object identification, which contains a large number of foreground object templates stored in the computer in advance. After acquiring the original image, the terminal extracts the image features of the original image and determines, among the foreground objects contained in the library, the target foreground object template matching those features. The original image is then divided into regions according to the region division rule mapped to the target foreground object template: the foreground object range area of the input image is detected automatically by template matching, and the main image area corresponding to the foreground object can be framed by a predefined geometric shape. For example, if the foreground subject is a person, a person detection template may be used to determine the foreground region range and then determine the first pixel region and the second pixel region of the original image.
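The template-matching idea above can be sketched, under heavy simplification, as a dependency-free sliding-window sum-of-squared-differences search (a real implementation would more likely use a library routine such as OpenCV's matchTemplate; the function name and toy data are illustrative):

```python
import numpy as np

def match_template_ssd(image: np.ndarray, template: np.ndarray):
    """Return the (row, col) of the best template match by sliding-window
    sum of squared differences (smaller SSD = better match)."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            window = image[r:r + th, c:c + tw].astype(np.int64)
            ssd = int(((window - template) ** 2).sum())
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Plant a template inside a larger blank image and recover its position.
tpl = np.array([[9, 9], [9, 9]])
img = np.zeros((5, 5), dtype=np.int64)
img[2:4, 1:3] = tpl
pos = match_template_ssd(img, tpl)
```

The recovered position gives the top-left corner of the framed foreground region.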
In one possible implementation, a model-based identification method using a neural network may be adopted. Specifically, the terminal inputs the original image into a foreground object recognition model, which outputs the first pixel area and the second pixel area corresponding to the original image.
Specifically, image sample data is acquired in advance, and a foreground object label is annotated on at least one sample image contained in the image sample data. An initial neural network model is created; the image sample data is input into it, and an object identification result corresponding to the image sample data is output. The initial neural network model is then trained based on the object recognition result and the foreground object label to obtain the trained foreground object recognition model.
Further, the foreground object recognition model may be a deep-learning network model, with an initial neural network model created in advance as the initial foreground object recognition model. All or part of the sample images may be obtained from existing image databases (such as the WIDER FACE dataset, the IJB-C test set, the AVA dataset, etc.), and/or sample images may be taken in a real environment with a device that has a photographing function. A large amount of image sample data containing sample images is obtained and preprocessed; the preprocessing includes digitization, geometric transformation, normalization, smoothing, restoration, and enhancement, and eliminates irrelevant information from the sample images. A specified number of foreground object marks are annotated (the number is determined by the actual foreground object detection environment; for example, 128 object mark points, i.e., foreground object marks, may be used). Using the foreground object marks as supervision signals, model training is monitored: sample image features are extracted from the sample images and input into the initial foreground object identification model for training, and the object identification result corresponding to the image sample data is output, namely a first sample pixel area corresponding to the foreground object and a second sample pixel area outside the foreground object.
Further, in order to accelerate the recognition speed of the initial foreground object recognition model, in the present application the foreground object recognition model fits a common two-dimensional convolution with image convolution during training, which speeds up the training of the model. A loss function for the multi-task training is set, and whether the foreground object recognition model at the current training stage has converged is judged. Convergence indicates that the error between each predicted evaluation result and the corresponding label (i.e., the foreground object label) has reached the expected value, for example, that the output of the preset loss function is smaller than a preset threshold; in that case the model's evaluation of image quality is relatively accurate. If the model has not converged, the error between each actual output value and the corresponding expected output value has not reached the expected value. In that case, the connection weights and thresholds of each layer are adjusted along the output path based on the error between the expected and actual output values propagated back from the output layer.
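As a toy stand-in for the training loop and convergence check described above (the actual model is a deep segmentation network, which is assumed rather than shown), a per-pixel logistic classifier can be trained with a cross-entropy loss and stopped once the loss falls below a preset threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sample image features": one feature vector per pixel, with the
# foreground object label (1 = foreground) as the supervision signal.
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = (X @ w_true > 0).astype(float)           # foreground object labels

w = np.zeros(3)
losses = []
for step in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted foreground prob.
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    losses.append(loss)
    if loss < 0.05:                          # preset convergence threshold
        break
    grad = X.T @ (p - y) / len(y)            # error propagated back
    w -= 0.5 * grad                          # adjust weights along gradient
```

The loss decreases over training, and the loop stops as soon as the preset threshold is reached, mirroring the convergence test in the text.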
For further explanation, the labeling process of the foreground object label is described as follows:
the method comprises the steps that a terminal obtains image sample data, and then a foreground object label is marked on at least one sample image contained in the image sample data;
specifically, the terminal determines a reference foreground object corresponding to at least one sample image contained in the image sample data, and labels pixel points of the reference foreground object in the sample image in a target channel labeling mode, if the foreground object of the character image is determined to be a character, and labels the pixel points corresponding to the foreground object-the character in the character image, so that a corresponding foreground object label corresponding to the sample image can be generated, namely the pixel labeled points are used as foreground object labels.
Illustratively, the target channel annotation mode may be a single-channel annotation mode: the terminal performs image segmentation and annotation on each pixel point of an image region containing foreground and background, divides it into a foreground region and a background region, and binarizes the two regions to obtain a binarized image (i.e., a binarized image composed of the first sample pixel region and the second sample pixel region).
Exemplarily, after the image segmentation, the pixel value of each pixel point in the foreground region of the segmented foreground object is marked as 1, and the pixel value of each pixel point in the background region outside the foreground region is marked as 0; at this point, the pixel value of each pixel point in the binarized image is either 0 or 1.
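The single-channel binarization of the annotation can be sketched as follows (the label value and the toy array are invented for illustration):

```python
import numpy as np

def binarize_annotation(label_map: np.ndarray, foreground_value: int) -> np.ndarray:
    """Single-channel annotation: mark foreground pixels as 1 and all
    background pixels as 0, yielding the binarized label image."""
    return (label_map == foreground_value).astype(np.uint8)

# Toy annotation: value 7 marks the reference foreground object.
labels = np.array([[7, 0, 0],
                   [7, 7, 0],
                   [0, 0, 0]])
mask = binarize_annotation(labels, foreground_value=7)
```

The resulting mask contains only the values 0 and 1, matching the binarized image described above.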
Step S102: compressing the first pixel area by adopting a first compression mode and compressing the second pixel area by adopting a second compression mode to generate a compressed image corresponding to the original image.
Wherein the image compression rate of the second compression mode is lower than that of the first compression mode.
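A minimal simulation of step S102, using quantization step size as a stand-in for compression rate (a real system would use two encoder settings, e.g., two JPEG quality levels): following the earlier description that the non-foreground part is further compressed at a high compression rate, the second pixel area is quantized more coarsely than the first, and the two results are merged by the foreground mask. All names and step sizes here are illustrative:

```python
import numpy as np

def quantize(region: np.ndarray, step: int) -> np.ndarray:
    """Coarser quantization step ~ fewer levels ~ stronger compression."""
    return (region // step) * step

def compress_by_region(image: np.ndarray, fg_mask: np.ndarray,
                       fg_step: int = 4, bg_step: int = 32) -> np.ndarray:
    """Apply a fine quantizer to the foreground (first pixel area) and a
    coarse one to the background (second pixel area), then merge them."""
    fg = quantize(image, fg_step)
    bg = quantize(image, bg_step)
    return np.where(fg_mask.astype(bool), fg, bg)

img = np.array([[100, 101], [200, 201]], dtype=np.uint8)
mask = np.array([[1, 0], [1, 0]], dtype=np.uint8)   # left column = foreground
out = compress_by_region(img, mask)
```

Foreground pixels survive almost unchanged, while background pixels lose precision, which is the trade-off the method exploits.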
In a possible embodiment, the first compression mode and the second compression mode may be determined according to the image application environment, i.e., an adaptive compression mode is determined based on the image application scenario. The image precision of an original image is usually high, but many scenarios do not need high-precision images; in an instant chat scenario, for example, the chat user may only need a rough view of the foreground object. Therefore, in the image application scenario of the present application, the device resolution, the image display size, and the image display scene (such as a chat scene, an image sharing scene (e.g., posting to a friend circle), an image batch scene, etc.) are obtained, and a virtual environment display model is constructed based on them. A large number of experimental samples under the virtual environment display model are configured by mathematical analysis to evaluate the display images output by the model, and based on the evaluation result of each sample, the optimal reference compression combination corresponding to each set of "device resolution, image display size, and image display scene" is finally determined, namely a first reference compression mode (used for compressing the first pixel area) and a second reference compression mode (used for compressing the second pixel area). In the sample evaluation process, means such as machine learning and expert evaluation can be introduced to make the evaluation of the virtual environment display model converge quickly. A mapping relationship between each set of "device resolution, image display size, and image display scene" and its reference compression combination is thereby established.
Further, in practical applications, the terminal only needs to acquire the target device resolution (the resolution of the device displaying the final image), the image display size (the interface display size of the final image), and the image display scene (e.g., one of a chat scene, an image sharing scene such as posting to a friend circle, an image batch scene, etc.) corresponding to the image application scenario. Based on the mapping relationship, a first reference compression mode and a second reference compression mode are obtained; the first reference compression mode serves as the first compression mode of this image compression, and the second reference compression mode serves as the second compression mode, where the compression rate of the first reference compression mode is greater than that of the second reference compression mode.
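The mapping from each set of "device resolution, image display size, and image display scene" to a reference compression combination can be sketched as a simple lookup table (all keys and mode names below are hypothetical placeholders, not values from the patent):

```python
# Hypothetical mapping from (device resolution, display size, display scene)
# to a (first, second) reference compression combination.
COMPRESSION_MAP = {
    ("1080p", "thumbnail", "chat"):     ("jpeg_q40", "jpeg_q15"),
    ("1080p", "fullscreen", "sharing"): ("jpeg_q70", "jpeg_q30"),
    ("4k", "fullscreen", "sharing"):    ("jpeg_q85", "jpeg_q50"),
}

def lookup_compression(resolution: str, display_size: str, scene: str):
    """Return the (first, second) reference compression modes for the given
    context, falling back to a conservative default for unknown contexts."""
    return COMPRESSION_MAP.get((resolution, display_size, scene),
                               ("jpeg_q75", "jpeg_q40"))

first_mode, second_mode = lookup_compression("1080p", "thumbnail", "chat")
```

At runtime the terminal performs exactly this lookup once per image, making the adaptive selection essentially free.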
In a possible implementation, the first compression mode and the second compression mode may be determined by the terminal according to the communication quality of the image transmission; different communication qualities correspond to different compression schemes. The terminal obtains the communication quality during image transmission and then the compression scheme corresponding to that quality, where each scheme specifies a first reference compression mode and a second reference compression mode.
In a possible implementation, the first compression mode and the second compression mode may be terminal defaults; the terminal may also provide a human-computer interaction interface later, so that the user can change the settings on that interface.
The compression method may be lossy data compression or lossless data compression. Lossless data compression includes, but is not limited to, run-length coding, entropy coding, and the LZW adaptive dictionary algorithm; lossy data compression includes, but is not limited to, chroma subsampling, color reduction, image transform coding, and the like.
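As a minimal sketch of one of the lossless methods named above, run-length coding collapses runs of repeated byte values into (value, count) pairs. This toy version is illustrative only and makes no claim about the codec actually used by the terminal.

```python
def rle_encode(data):
    """Run-length encode a byte sequence into (value, count) pairs."""
    encoded = []
    for b in data:
        # Extend the current run if the value repeats (cap runs at 255).
        if encoded and encoded[-1][0] == b and encoded[-1][1] < 255:
            encoded[-1] = (b, encoded[-1][1] + 1)
        else:
            encoded.append((b, 1))
    return encoded

def rle_decode(pairs):
    """Inverse of rle_encode: expand (value, count) pairs back to bytes."""
    out = bytearray()
    for value, count in pairs:
        out.extend([value] * count)
    return bytes(out)
```

Because the transform is reversible, `rle_decode(rle_encode(x)) == x` for any byte string, which is what makes it lossless.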
In this application, the compression types of the first compression method and the second compression method may be determined based on the actual application environment. It is only required that the compression rate of the first compression method be greater than that of the second compression method, so that the image precision after compression of the first region is greater than that after compression of the second region; the specific type is not limited here.
In the embodiment of the application, a terminal performs foreground object identification on an original image and determines a first pixel area and a second pixel area in the original image, where the first pixel area is the pixel area corresponding to the foreground object and the second pixel area is the pixel area in the original image other than the first pixel area; the terminal compresses the first pixel area using a first compression method and the second pixel area using a second compression method to generate a compressed image corresponding to the original image, where the image compression rate of the second compression method is lower than that of the first compression method. By identifying the foreground object of the original image, determining the first and second pixel areas, and compressing them with compression methods of different image compression rates, the scheme avoids the low compression ratio that results from compressing all areas of the whole image at a uniform rate; it improves the image compression ratio while preserving the image display effect, which benefits image application scenes such as image transmission, storage and sending, and yields a better image visual effect than applying the same compression ratio to different areas.
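The two-region flow summarized above can be sketched as follows. The region-identification and compression callables are placeholders for whatever concrete implementations the terminal uses; the structure, not the functions, is the point.

```python
def compress_image(original, identify_foreground, compress_first, compress_second):
    """Split the original image into foreground/background regions and
    compress each with its own method (the first method keeps more detail)."""
    first_region, second_region = identify_foreground(original)
    return {
        "foreground": compress_first(first_region),
        "background": compress_second(second_region),
    }
```

A trivial usage with stub callables:

```python
result = compress_image(
    "img",
    lambda img: ("fg", "bg"),
    lambda region: region + "_hi_quality",
    lambda region: region + "_lo_quality",
)
```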
Referring to fig. 2, fig. 2 is a schematic flowchart of another embodiment of an image compression method according to the present application. Specifically, the method comprises the following steps:
step S201: carrying out foreground object identification on an original image, and determining a first pixel area and a second pixel area in the original image.
For details, refer to step S101, which is not described herein again.
Step S202: determining a background special effect matching the first pixel region.
In practical application, in order to prevent the overall display effect of the image from becoming inconsistent because the foreground region is excessively highlighted when the original image is compressed by region, the present application can balance the difference in pixel values between the foreground region and the background region in a corresponding manner, thereby improving the display effect that the later-output compressed image gives to both the foreground and the background regions.
In the steps of this embodiment, the distortion that different image compression rates may introduce into the later compressed image can be balanced by applying a background special effect to the second pixel area; beautifying the second pixel area with the background special effect in advance evens out the pixel differences among the pixel points of the second pixel area, so that a compressed image with a good display effect is finally output.
In one possible embodiment: the terminal can perform image semantic recognition on the original image and determine an image scene corresponding to the first pixel region;
the image scene can be an outdoor sports scene, a travel scene, a food scene, a portrait scene, and the like.
The terminal can extract the image features of the whole original image and perform image semantic recognition based on those features. In one mode, the terminal is preset with reference image features corresponding to each reference image scene; the terminal calculates the feature matching degree between the extracted image features and the reference image features of each reference image scene, and determines the image scene indicated by the highest matching degree. In another mode, the terminal creates an image semantic recognition model based on a deep neural network in advance: a large amount of sample image data is obtained, sample features are extracted from it, image scene labels are attached to the sample image data, the sample image data is then input into the image semantic recognition model for training, and the model parameters are calibrated based on the image scene labels, so that a trained image semantic recognition model is obtained.
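The first recognition mode, matching extracted image features against preset per-scene reference features, might look like the following sketch. The three-dimensional feature vectors are stand-ins for real image descriptors, and cosine similarity is one assumed choice of matching degree.

```python
import math

# Hypothetical preset reference features, one vector per reference image scene.
REFERENCE_FEATURES = {
    "portrait": [0.9, 0.1, 0.2],
    "food": [0.2, 0.8, 0.3],
    "travel": [0.1, 0.3, 0.9],
}

def cosine_similarity(a, b):
    """Matching degree between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recognize_scene(image_features):
    """Return the reference scene with the highest matching degree."""
    return max(REFERENCE_FEATURES,
               key=lambda s: cosine_similarity(image_features,
                                               REFERENCE_FEATURES[s]))
```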
The terminal can use image beautification software and an image beautification engine to produce at least one background special effect for each reference image scene in advance and establish the correspondence between reference image scenes and reference background special effects; in practical application, the terminal then only needs to obtain the background special effect matched with the determined image scene of the original image. The background special effect may be a background blurring effect, a background filter effect, a background film effect, a background style effect, or the like.
Step S203: and determining at least one pixel sub-area from the second pixel area based on the background special effect, and determining the reference compression rate of each pixel sub-area.
Different background special effects can be mapped to regional compression rates for different background object types. Generally, the image of the second pixel region also has layering; if pixel sub-regions of different object types were compressed at the same reference compression rate and the background special effect were then added to them (the background special effect and the compression of the second pixel region can be performed in either order), the generated compressed image would probably be distorted, causing a poor display effect in the background region.
Further, when a background special effect is configured in advance, corresponding compression configuration information is set for each background special effect, and the compression configuration information corresponding to the background special effect is obtained. The compression configuration information includes region division information and region compression rate information. The region division information includes the different region division object types of the background special effect and the region shapes of those types; for example, a certain background special effect focuses on dividing mountain object types and sea object types, where the region shape of the mountain object type is triangular, that of the sea object type is rectangular, and so on. The region compression rate information is the reference compression rate corresponding to the pixel sub-region of each region division object.
Further, the terminal extracts an image object feature corresponding to the second pixel region, and then may determine at least one pixel sub-region from the second pixel region based on the compression configuration information and the image object feature, where the region division type of each pixel sub-region is different;
in specific implementation, the terminal may perform image object identification by extracting the image object features corresponding to the second pixel region, determining the image objects contained in the second pixel region and the pixel regions over which those objects are distributed; then, the target objects (the objects that need to be compressed at different reference compression rates) corresponding to the background special effect are determined from those image objects based on the compression configuration information, and a pixel sub-region is determined according to the region division type of each target object, so that at least one pixel sub-region can be determined from the second pixel region, each with a different region division type;
and the terminal acquires the reference compression rate corresponding to each pixel sub-region from the compression configuration information based on the region division type respectively corresponding to the at least one pixel sub-region.
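A hedged sketch of how the compression configuration information (region division types plus per-type reference compression rates) could drive sub-region planning follows. The object names, shapes, and rates are invented for illustration, not taken from the patent.

```python
# Hypothetical compression configuration for one background special effect.
COMPRESSION_CONFIG = {
    "region_division": {"mountain": "triangle", "sea": "rectangle"},
    "region_compression": {"mountain": 0.5, "sea": 0.3},
}

def plan_subregions(detected_objects, config):
    """Keep only the objects this special effect divides, pairing each with
    its region shape and reference compression rate."""
    plan = []
    for obj in detected_objects:
        if obj in config["region_division"]:
            plan.append((obj,
                         config["region_division"][obj],
                         config["region_compression"][obj]))
    return plan
```

Objects the configuration does not name (e.g. "sky" below) are simply left to the default handling of the second pixel region.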
Step S204: adding a background special effect to the second pixel area, and compressing each pixel sub-area by adopting a second compression mode based on the reference compression rate.
Specifically, image processing is performed on each pixel point in the second pixel region using the image special effect processing mode corresponding to the background special effect; for example, the processing mode can adjust each pixel point to a certain indicated value according to the adjustment parameters (that is, the image attribute parameters) determined for each image attribute indicated by the background special effect. Further, after the background special effect processing of the second pixel region is completed, each pixel sub-region is compressed by the second compression method based on its reference compression rate.
For example, for the brightness adjustment corresponding to a background special effect, the values of the RGB components of each pixel point in the second pixel region may be adjusted directly according to the special-effect brightness adjustment parameter among the determined image attribute parameters: for example, all increased by 20%, all decreased by 20%, or adjusted to a specified value, to achieve a brightness increase or decrease. Alternatively, the RGB values of each pixel point in the second pixel area can be converted into YUV values (Y represents luminance, and U and V represent chrominance), and the value of the Y component adjusted, likewise raising or lowering the brightness.
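The direct RGB adjustment described above (e.g. raising every component by 20%) can be sketched as a per-channel scale with clamping. This is an illustrative assumption about the adjustment, not the patented routine; pixels are modeled as (R, G, B) tuples.

```python
def adjust_brightness(pixels, factor):
    """Scale each RGB component of each pixel by `factor`,
    clamping the result to the valid range [0, 255]."""
    return [tuple(min(255, max(0, round(c * factor))) for c in px)
            for px in pixels]
```

For instance, `factor=1.2` raises brightness by 20%, and components that would exceed 255 saturate instead of wrapping around.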
For example, for the contrast adjustment corresponding to a background special effect, the color distribution of the second pixel region may be adjusted according to the contrast adjustment parameter among the determined image attribute parameters, so as to disperse or concentrate the color regions. For example, a histogram equalization method may be adopted: the gray-level histogram of the second pixel region is extracted and transformed from a distribution concentrated in a certain gray-level interval into a uniform distribution over the whole gray-level range, after which the image pixel values may be reallocated according to the determined contrast adjustment parameter, thereby adjusting the image contrast.
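The histogram-equalization step can be sketched on a flat list of gray levels using the standard cumulative-histogram remapping; a real implementation would operate on the region's 2-D pixel grid, but the arithmetic is the same.

```python
def equalize_gray(pixels, levels=256):
    """Histogram-equalize a flat list of gray values via the cumulative histogram."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the gray levels.
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # flat image: every pixel is the same level, nothing to spread
        return list(pixels)
    # Standard remap: stretch the occupied interval over the full gray range.
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]
```

Four evenly spaced but low-range levels, for example, get stretched to span the full 0-255 range.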
In some embodiments, there is no fixed order between the step of "adding a background special effect to the second pixel area" and the step of "compressing each of the pixel sub-areas by the second compression method based on the reference compression rate".
Step S205: and compressing the first pixel region by adopting a first compression mode.
Step S206: and generating a compressed image corresponding to the original image.
For details, refer to step S102, which is not described herein again.
In the embodiment of the application, a terminal performs foreground object identification on an original image and determines a first pixel area and a second pixel area in the original image, where the first pixel area is the pixel area corresponding to the foreground object and the second pixel area is the pixel area in the original image other than the first pixel area; the terminal compresses the first pixel area using a first compression method and the second pixel area using a second compression method to generate a compressed image corresponding to the original image, where the image compression rate of the second compression method is lower than that of the first compression method. By identifying the foreground object of the original image, determining the first and second pixel areas, and compressing them with compression methods of different image compression rates, the scheme avoids the low compression ratio that results from compressing all areas of the whole image at a uniform rate; it improves the image compression ratio while preserving the image display effect, which benefits image application scenes such as image transmission, storage and sending, and yields a better image visual effect than applying the same compression ratio to different areas.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 3, a schematic structural diagram of an image compression apparatus according to an exemplary embodiment of the present application is shown. The image compression apparatus may be implemented as all or part of an apparatus by software, hardware, or a combination of both. The apparatus 1 comprises a region determination module 11 and an image compression module 12.
The region determining module 11 is configured to perform foreground object identification on an original image and determine a first pixel region and a second pixel region in the original image, where the first pixel region is the pixel region corresponding to the foreground object, and the second pixel region is the pixel region in the original image other than the first pixel region;
a region compression module 12, configured to compress the first pixel region by using a first compression method and compress the second pixel region by using a second compression method, so as to generate a compressed image corresponding to the original image; wherein an image compression rate of the second compression method is lower than an image compression rate of the first compression method.
Optionally, the area determining module 11 is specifically configured to:
receiving a foreground object input for the original image, and performing foreground object identification on the original image accordingly; or,
scene semantic information related to the original image is obtained, semantic processing is carried out on the basis of the scene semantic information, and a foreground object corresponding to the original image is determined.
Optionally, the area determining module 11 is specifically configured to:
the method comprises the steps of inputting an original image into a foreground object recognition model, and outputting a first pixel area and a second pixel area corresponding to the original image.
Optionally, the apparatus 1 is specifically configured to:
acquiring image sample data, and labeling a foreground object label on at least one sample image contained in the image sample data;
creating an initial neural network model, inputting the image sample data into the initial neural network model, and outputting an object identification result corresponding to the image sample data;
and calibrating the initial neural network model based on the object identification result and/or the foreground object label, thereby obtaining the trained foreground object recognition model.
Optionally, the apparatus 1 is specifically configured to:
determining a reference foreground object corresponding to at least one sample image included in the image sample data;
and marking the pixel points of the reference foreground object in the sample image by adopting a target channel marking mode, and generating a foreground object label corresponding to the sample image.
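The target-channel marking of foreground pixel points can be pictured as writing a single-channel label mask (an alpha-like channel) in which foreground coordinates are set. The 0/255 marker values below are an assumption for illustration, not a claimed encoding.

```python
def label_foreground(width, height, foreground_pixels):
    """Build a single-channel label mask for a sample image:
    255 marks pixels of the reference foreground object, 0 marks background."""
    mask = [[0] * width for _ in range(height)]
    for x, y in foreground_pixels:
        mask[y][x] = 255
    return mask
```

The resulting mask plays the role of the foreground object label attached to each sample image during training.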
Optionally, as shown in fig. 4, the region compressing module 12 includes:
a special effect determining unit 121 configured to determine a background special effect matched to the first pixel region;
a region compressing unit 122, configured to compress the second pixel region based on the background special effect and the second compression manner.
Optionally, as shown in fig. 5, the special effect determining unit 121 includes:
a scene determining subunit 1211, configured to perform image semantic recognition on the original image, and determine an image scene corresponding to the first pixel region;
a special effect obtaining subunit 1212, configured to obtain a background special effect matched with the image scene.
Optionally, as shown in fig. 6, the region compressing unit 122 includes:
a compression rate determining subunit 1221, configured to determine at least one pixel sub-region from the second pixel region based on the background special effect, and determine a reference compression rate of each pixel sub-region;
a region compressing subunit 1222, configured to add a background special effect to the second pixel region, and compress each of the pixel sub-regions by a second compression method based on the reference compression rate.
Optionally, the compression ratio determining subunit 1221 is specifically configured to:
acquiring compression configuration information corresponding to the background special effect;
extracting image object features corresponding to the second pixel region, and determining at least one pixel sub-region from the second pixel region based on the compression configuration information and the image object features, wherein the region division type of each pixel sub-region is different;
and acquiring a reference compression rate corresponding to each pixel sub-region from the compression configuration information based on the region division type corresponding to the at least one pixel sub-region respectively.
It should be noted that, when the image compression apparatus provided in the foregoing embodiment executes the image compression method, the division into functional modules above is only illustrative; in practical applications, the functions may be distributed to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the image compression apparatus and the image compression method provided by the above embodiments belong to the same concept; details of the implementation process can be found in the method embodiments and are not described herein again.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the embodiment of the application, a terminal performs foreground object identification on an original image and determines a first pixel area and a second pixel area in the original image, where the first pixel area is the pixel area corresponding to the foreground object and the second pixel area is the pixel area in the original image other than the first pixel area; the terminal compresses the first pixel area using a first compression method and the second pixel area using a second compression method to generate a compressed image corresponding to the original image, where the image compression rate of the second compression method is lower than that of the first compression method. By identifying the foreground object of the original image, determining the first and second pixel areas, and compressing them with compression methods of different image compression rates, the scheme avoids the low compression ratio that results from compressing all areas of the whole image at a uniform rate; it improves the image compression ratio while preserving the image display effect, which benefits image application scenes such as image transmission, storage and sending, and yields a better image visual effect than applying the same compression ratio to different areas.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, and the instructions are suitable for being loaded by a processor and executing the image compression method according to the embodiment shown in fig. 1 to fig. 2, and a specific execution process may refer to specific descriptions of the embodiment shown in fig. 1 to fig. 2, which is not described herein again.
The present application further provides a computer program product, in which at least one instruction is stored, the at least one instruction being loaded by the processor to execute the image compression method according to the embodiments shown in fig. 1 to fig. 2; for the specific execution process, reference may be made to the specific descriptions of the embodiments shown in fig. 1 to fig. 2, which are not repeated herein.
Referring to fig. 7, a block diagram of an electronic device according to an exemplary embodiment of the present application is shown. The electronic device in the present application may comprise one or more of the following components: a processor 110, a memory 120, an input device 130, an output device 140, and a bus 150. The processor 110, memory 120, input device 130, and output device 140 may be connected by a bus 150.
Processor 110 may include one or more processing cores. The processor 110 connects various components throughout the electronic device using various interfaces and lines, and performs various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and invoking data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is used for rendering and drawing display content; the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 110, but may instead be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable medium. The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described below, and the like; the operating system may be the Android system (including systems developed in depth on the basis of the Android system), the iOS system developed by Apple (including systems developed in depth on the basis of the iOS system), or other systems. The data storage area may also store data created by the electronic device during use, such as a phonebook, audio-visual data, chat log data, and the like.
Referring to fig. 8, the memory 120 may be divided into an operating system space, where an operating system is run, and a user space, where native and third-party applications are run. In order to ensure that different third-party application programs can achieve a better operation effect, the operating system allocates corresponding system resources to the different third-party application programs. However, the requirements of different application scenarios in the same third-party application program on system resources are different, for example, in a local resource loading scenario, the third-party application program has a higher requirement on the disk reading speed; in an animation rendering scene, the third-party application program has a high requirement on the performance of the GPU. The operating system and the third-party application program are independent from each other, and the operating system cannot sense the current application scene of the third-party application program in time, so that the operating system cannot perform targeted system resource adaptation according to the specific application scene of the third-party application program.
In order to enable the operating system to distinguish a specific application scenario of the third-party application program, data communication between the third-party application program and the operating system needs to be opened, so that the operating system can acquire current scenario information of the third-party application program at any time, and further perform targeted system resource adaptation based on the current scenario.
Taking the Android system as an example operating system, the programs and data stored in the memory 120 are as shown in fig. 9: a Linux kernel layer 320, a system runtime library layer 340, an application framework layer 360, and an application layer 380 may be stored in the memory 120, where the Linux kernel layer 320, the system runtime library layer 340, and the application framework layer 360 belong to the operating system space, and the application layer 380 belongs to the user space. The Linux kernel layer 320 provides underlying drivers for the various hardware of the electronic device, such as a display driver, an audio driver, a camera driver, a Bluetooth driver, a Wi-Fi driver, power management, and the like. The system runtime library layer 340 provides the main feature support for the Android system through a set of C/C++ libraries. For example, the SQLite library provides support for databases, the OpenGL/ES library provides support for 3D drawing, the Webkit library provides support for the browser kernel, and so on. Also provided in the system runtime library layer 340 is the Android runtime library (Android runtime), which mainly provides some core libraries that allow developers to write Android applications in the Java language. The application framework layer 360 provides the various APIs that may be used in building applications; developers can build their own applications by using these APIs, such as activity management, window management, view management, notification management, content providers, package management, session management, resource management, and location management.
At least one application program runs in the application layer 380, and the application programs may be native application programs carried by the operating system, such as a contact program, a short message program, a clock program, a camera application, and the like; or a third-party application developed by a third-party developer, such as a game application, an instant messaging program, a photo beautification program, an image compression program, and the like.
Taking the iOS system as an example operating system, the programs and data stored in the memory 120 are shown in fig. 10. The iOS system includes: a Core operating system Layer 420 (Core OS Layer), a Core Services Layer 440 (Core Services Layer), a Media Layer 460 (Media Layer), and a touchable Layer 480 (Cocoa Touch Layer). The core operating system layer 420 includes the operating system kernel, drivers, and underlying program frameworks that provide functionality closer to hardware for use by program frameworks located in the core services layer 440. The core services layer 440 provides the system services and/or program frameworks required by applications, such as a Foundation framework, an account framework, an advertisement framework, a data storage framework, a network connection framework, a geographic location framework, a motion framework, and so forth. The media layer 460 provides audiovisual-related interfaces for applications, such as graphics and image related interfaces, audio technology related interfaces, video technology related interfaces, and the audio/video transmission technology wireless playback (AirPlay) interface. The touchable layer 480 provides various common interface-related frameworks for application development and is responsible for user touch interaction operations on the electronic device, such as a local notification service, a remote push service, an advertising framework, a game tool framework, a messaging User Interface (UI) framework, a User Interface UIKit framework, a map framework, and so forth.
In the framework illustrated in FIG. 10, the framework associated with most applications includes, but is not limited to: a base framework in the core services layer 440 and a UIKit framework in the touchable layer 480. The base framework provides many basic object classes and data types, provides the most basic system services for all applications, and is UI independent. While the class provided by the UIKit framework is a basic library of UI classes for creating touch-based user interfaces, iOS applications can provide UIs based on the UIKit framework, so it provides an infrastructure for applications for building user interfaces, drawing, processing and user interaction events, responding to gestures, and the like.
For the mode and principle of realizing data communication between a third-party application program and the operating system in the iOS system, reference can be made to the Android system, and details are not repeated herein.
The input device 130 is used for receiving input instructions or data, and the input device 130 includes, but is not limited to, a keyboard, a mouse, a camera, a microphone, or a touch device. The output device 140 is used for outputting instructions or data, and the output device 140 includes, but is not limited to, a display device, a speaker, and the like. In one example, the input device 130 and the output device 140 may be combined, and the input device 130 and the output device 140 are touch display screens for receiving a touch operation of a user on or near the touch display screens by using a finger, a touch pen or any other suitable object, and displaying user interfaces of various applications. Touch displays are typically provided on the front panel of an electronic device. The touch display screen may be designed as a full-face screen, a curved screen, or a profiled screen. The touch display screen can also be designed as a combination of a full-screen and a curved-surface screen, and a combination of a special-shaped screen and a curved-surface screen, which is not limited in this application.
In addition, those skilled in the art will appreciate that the configurations of the electronic devices illustrated in the above-described figures do not constitute limitations on the electronic devices, which may include more or fewer components than illustrated, or some components may be combined, or a different arrangement of components. For example, the electronic device further includes a radio frequency circuit, an input unit, a sensor, an audio circuit, a wireless fidelity (WiFi) module, a power supply, a bluetooth module, and other components, which are not described herein again.
In the embodiments of the present application, the execution subject of each step may be the electronic device described above. Optionally, the execution subject of each step is the operating system of the electronic device, which may be the Android system, the iOS system, or another operating system; this is not limited in the embodiments of the present application.
The electronic device of the embodiments of the application may also be provided with a display device, which may be any device capable of realizing a display function, for example: a cathode ray tube (CRT) display, a light-emitting diode (LED) display, an electronic ink panel, a liquid crystal display (LCD), a plasma display panel (PDP), and the like. A user may use the display device on the electronic device 101 to view displayed text, images, videos, and other information. The electronic device may be a smartphone, a tablet computer, a gaming device, an AR (Augmented Reality) device, an automobile, a data storage device, an audio playback device, a video playback device, a notebook, a desktop computing device, or a wearable device such as an electronic watch, electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace, or an electronic garment.
In the electronic device shown in FIG. 7, which may be a terminal, the processor 110 may be configured to invoke the image compression application stored in the memory 120 and specifically perform the following operations:
performing foreground object identification on an original image, and determining a first pixel area and a second pixel area in the original image, wherein the first pixel area is the pixel area corresponding to a foreground object, and the second pixel area is the pixel area of the original image other than the first pixel area;
compressing the first pixel area in a first compression mode and the second pixel area in a second compression mode to generate a compressed image corresponding to the original image, wherein the image compression rate of the second compression mode is lower than that of the first compression mode.
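The two-mode compression described above can be sketched as follows. The patent does not name a concrete codec, so this sketch stands in for the two compression modes with coarser versus finer quantization; the step sizes, the mask format, and the function name are illustrative assumptions, not details from the disclosure.

```python
# Region-based compression sketch: quantize foreground pixels with a
# small step (high precision) and background pixels with a large step
# (high compression). Step sizes 4/32 are assumed for illustration.

def compress_regions(image, mask, fg_step=4, bg_step=32):
    """image: rows of grayscale pixel values; mask: 1 marks the first
    (foreground) pixel area, 0 the second (background) pixel area."""
    out = []
    for row, mrow in zip(image, mask):
        out.append([
            (p // fg_step) * fg_step if m else (p // bg_step) * bg_step
            for p, m in zip(row, mrow)
        ])
    return out

image = [[200, 201], [90, 91]]
mask = [[1, 0], [0, 1]]          # first pixel area at (0,0) and (1,1)
compressed = compress_regions(image, mask)
```

Foreground values survive almost unchanged, while background values collapse onto a much coarser grid, mirroring the lower precision allowed for the second pixel area.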
In an embodiment, when performing the foreground object recognition on the original image, the processor 110 specifically performs the following operations:
receiving a foreground object input for the original image, and performing foreground object identification on the original image accordingly; or
obtaining scene semantic information related to the original image, performing semantic processing based on the scene semantic information, and determining the foreground object corresponding to the original image.
In an embodiment, when performing foreground object recognition on an original image and determining a first pixel region and a second pixel region corresponding to a foreground object in the original image, the processor 110 specifically performs the following operations:
inputting the original image into a foreground object recognition model, and outputting the first pixel area and the second pixel area corresponding to the original image.
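The model's output can be understood as a binary mask that partitions the image into the two pixel areas. A minimal sketch, assuming a mask-producing model (the brightness-threshold "model" below is a hypothetical placeholder, not the trained recognition model of the disclosure):

```python
# Sketch: split an image into first/second pixel areas from a
# foreground mask returned by a recognition model.

def split_pixel_areas(image, model):
    mask = model(image)                      # 1 = foreground pixel
    first = [(r, c) for r, row in enumerate(mask)
             for c, m in enumerate(row) if m]
    second = [(r, c) for r, row in enumerate(mask)
              for c, m in enumerate(row) if not m]
    return first, second

# Toy stand-in "model": call bright pixels foreground.
toy_model = lambda img: [[int(p > 128) for p in row] for row in img]
first, second = split_pixel_areas([[200, 10], [30, 240]], toy_model)
```

The coordinate lists `first` and `second` then drive the two compression modes applied to each area.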
In one embodiment, the processor 110 specifically performs the following operations when executing the image compression method:
acquiring image sample data, and labeling at least one sample image contained in the image sample data with a foreground object label;
creating an initial neural network model, inputting the image sample data into the initial neural network model, and outputting an object identification result corresponding to the image sample data;
and training the initial neural network model based on the object recognition result and/or the foreground object label to obtain the trained foreground object recognition model.
In one embodiment, when the processor 110 performs the labeling of the foreground object label on at least one sample image included in the image sample data, the following operations are specifically performed:
determining a reference foreground object corresponding to at least one sample image included in the image sample data;
and marking the pixel points of the reference foreground object in the sample image using a target-channel marking mode, thereby generating the foreground object label corresponding to the sample image.
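One way to read "target-channel marking" is to write the label into an extra channel of the sample image itself. A minimal sketch under that assumption (the choice of a 4th, alpha-like channel and the value 255 are illustrative, not specified by the disclosure):

```python
# Sketch: generate a foreground object label by writing 255 into an
# assumed extra (4th) channel at every pixel covered by the reference
# foreground object; the marked channel serves as the label.

def label_target_channel(rgb_image, fg_pixels):
    labeled = [[list(px) + [0] for px in row] for row in rgb_image]
    for r, c in fg_pixels:
        labeled[r][c][3] = 255           # mark foreground in channel 3
    return labeled

img = [[(10, 10, 10), (20, 20, 20)]]
lab = label_target_channel(img, [(0, 1)])
```

Storing the label as a channel keeps label and image pixel-aligned, which is convenient when feeding sample pairs into the training loop.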
In an embodiment, when the processor 110 performs the compression on the second pixel region by using the second compression method, the following operations are specifically performed:
determining a background special effect matching the first pixel area, and compressing the second pixel area based on the background special effect and the second compression mode.
In one embodiment, the processor 110 specifically performs the following operations when performing the determining of the background special effect matching with the first pixel region:
performing image semantic recognition on the original image, and determining an image scene corresponding to the first pixel area;
and acquiring a background special effect matched with the image scene.
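Matching an effect to the recognized scene can be as simple as a lookup table keyed by scene. The scene names, effect names, and default below are hypothetical examples, since the patent does not enumerate them:

```python
# Sketch: pick a background special effect from the image scene
# recognized for the first pixel area. Table contents are assumed.

SCENE_EFFECTS = {
    "portrait": "bokeh_blur",
    "landscape": "soft_gradient",
}

def match_background_effect(scene, default="plain_blur"):
    return SCENE_EFFECTS.get(scene, default)

effect = match_background_effect("portrait")
```

A default entry keeps the pipeline working when semantic recognition returns a scene with no dedicated effect.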
In an embodiment, when the processor 110 performs the compression on the second pixel region based on the background special effect and the second compression manner, the following operations are specifically performed:
determining at least one pixel sub-region from the second pixel region based on the background special effect, and determining a reference compression rate of each pixel sub-region;
adding a background special effect to the second pixel area, and compressing each pixel sub-area by adopting a second compression mode based on the reference compression rate.
In one embodiment, when the processor 110 determines at least one pixel sub-region from the second pixel region based on the background special effect, and determines a reference compression rate of each pixel sub-region, the following operations are specifically performed:
acquiring compression configuration information corresponding to the background special effect;
extracting image object features corresponding to the second pixel region, and determining at least one pixel sub-region from the second pixel region based on the compression configuration information and the image object features, wherein the region division type of each pixel sub-region is different;
and acquiring a reference compression rate corresponding to each pixel sub-region from the compression configuration information based on the region division type corresponding to the at least one pixel sub-region respectively.
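The sub-region planning in the three steps above can be sketched as a lookup from the effect's compression configuration. The division-type names and rate values are assumptions for illustration; the disclosure only requires that each division type map to its own reference compression rate.

```python
# Sketch: given sub-regions of the second pixel area (each with a
# region division type) and the background effect's compression
# configuration, look up each sub-region's reference compression rate.

def plan_subregion_compression(subregions, config):
    """subregions: list of (division_type, pixel_count) pairs;
    config: division_type -> reference compression rate."""
    return [(kind, config[kind]) for kind, _ in subregions]

config = {"near_foreground": 0.5, "far_background": 0.9}   # assumed
plan = plan_subregion_compression(
    [("near_foreground", 1200), ("far_background", 5400)], config)
```

Each planned `(division_type, rate)` pair then parameterizes the second compression mode for that sub-region, so background detail near the foreground object can be preserved better than distant background.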
In the embodiments of the present application, a terminal performs foreground object identification on an original image and determines a first pixel area and a second pixel area in the original image, wherein the first pixel area is the pixel area corresponding to the foreground object and the second pixel area is the pixel area of the original image other than the first pixel area; the terminal then compresses the first pixel area in a first compression mode and the second pixel area in a second compression mode to generate a compressed image corresponding to the original image, wherein the image compression rate of the second compression mode is lower than that of the first compression mode. By identifying the foreground object of the original image, determining the first and second pixel areas, and compressing them in compression modes with different image compression rates, the method avoids the inefficiency of compressing every area of the whole image at a uniform rate; it improves the image compression rate while preserving the display effect, which benefits image application scenarios such as transmission, storage, and sending. Moreover, because different compression rates are used for different areas, the visual effect of the image is better.
It is clear to a person skilled in the art that the solution of the present application can be implemented by means of software and/or hardware. The terms "unit" and "module" in this specification refer to software and/or hardware capable of performing a specific function independently or in cooperation with other components, where the hardware may be, for example, a Field-Programmable Gate Array (FPGA), an Integrated Circuit (IC), or the like.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some service interfaces, devices or units, and may be an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, or in whole or in part, can be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program, which is stored in a computer-readable memory; the memory may include a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.
The above description is merely an exemplary embodiment of the present disclosure, and the scope of the present disclosure is not limited thereto. That is, all equivalent changes and modifications made in accordance with the teachings of the present disclosure are intended to be included within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (8)

1. An image processing method, applied to a terminal, the method comprising:
performing foreground object identification on an original image, and determining a first pixel area and a second pixel area in the original image, wherein the first pixel area is the pixel area corresponding to a foreground object, and the second pixel area is the pixel area of the original image other than the first pixel area;
compressing the first pixel region in a first compression mode, determining a background special effect matching the first pixel region, compressing the second pixel region based on the background special effect and a second compression mode, and generating a compressed image corresponding to the original image, wherein the image precision of the first pixel area compressed at the image compression rate of the first compression mode is higher than that of the second pixel area compressed at the image compression rate of the second compression mode;
the compressing the second pixel region based on the background special effect and a second compression mode includes:
acquiring compression configuration information corresponding to the background special effect, extracting image object features corresponding to the second pixel region, and determining at least one pixel sub-region from the second pixel region based on the compression configuration information and the image object features, wherein the region division types of the pixel sub-regions are different;
acquiring a reference compression rate corresponding to each pixel sub-region from the compression configuration information based on the region division type corresponding to the at least one pixel sub-region;
adding a background special effect to the second pixel area, and compressing each pixel sub-area by adopting a second compression mode based on the reference compression rate.
2. The method according to claim 1, wherein performing foreground object recognition on the original image comprises:
receiving a foreground object input for the original image, and performing foreground object identification on the original image accordingly; or
obtaining scene semantic information related to the original image, performing semantic processing based on the scene semantic information, and determining the foreground object corresponding to the original image.
3. The method according to claim 1, wherein the performing foreground object recognition on the original image and determining a first pixel region and a second pixel region corresponding to the foreground object in the original image comprises:
inputting the original image into a foreground object recognition model, and outputting the first pixel area and the second pixel area corresponding to the original image.
4. The method of claim 3, further comprising:
acquiring image sample data, and labeling a foreground object label on at least one sample image contained in the image sample data;
creating an initial neural network model, inputting the image sample data into the initial neural network model, and outputting an object identification result corresponding to the image sample data;
and training the initial neural network model based on the object recognition result and/or the foreground object label to obtain the trained foreground object recognition model.
5. The method of claim 4, wherein said labeling at least one sample image included in the image sample data with a foreground object label comprises:
determining a reference foreground object corresponding to at least one sample image included in the image sample data;
and marking the pixel points of the reference foreground object in the sample image using a target-channel marking mode, thereby generating the foreground object label corresponding to the sample image.
6. The method of claim 1, wherein determining the background special effect that matches the first pixel region comprises:
performing image semantic recognition on the original image, and determining an image scene corresponding to the first pixel area;
and acquiring a background special effect matched with the image scene.
7. A computer storage medium storing instructions adapted to be loaded by a processor and to perform the method steps according to any of claims 1 to 6.
8. An electronic device, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to carry out the method steps according to any one of claims 1 to 6.
CN202011547685.3A 2020-12-23 2020-12-23 Image compression method, image compression device, storage medium and electronic equipment Active CN112839223B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011547685.3A CN112839223B (en) 2020-12-23 2020-12-23 Image compression method, image compression device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112839223A CN112839223A (en) 2021-05-25
CN112839223B true CN112839223B (en) 2022-12-20

Family

ID=75924286






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant