CN110070495B - Image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN110070495B
CN110070495B (application CN201910124386.XA)
Authority
CN
China
Prior art keywords
image
gray
pixel
map
value
Prior art date
Legal status
Active
Application number
CN201910124386.XA
Other languages
Chinese (zh)
Other versions
CN110070495A (en)
Inventor
周景锦
Current Assignee
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201910124386.XA priority Critical patent/CN110070495B/en
Publication of CN110070495A publication Critical patent/CN110070495A/en
Application granted granted Critical
Publication of CN110070495B publication Critical patent/CN110070495B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 5/77
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image

Abstract

The present disclosure provides an image processing method and apparatus, and an electronic device. The image processing method includes: acquiring an original image; performing first processing on the original image to obtain a first image, where the color value of each pixel in the first image is calculated from color values of pixels in the original image; calculating a gray-scale map of the original image and a gray-scale map of the first image; calculating the inverse map of the gray-scale map of the first image; and calculating the gray values of pixels in a second image from the gray-scale map of the original image and the inverse map of the gray-scale map of the first image, to obtain the second image. By processing the original image and blending the gray-scale map of the processed image with the gray-scale map of the original image, the method produces an image with a pencil-drawing style, and solves the technical problem in the prior art that a pencil-drawing-style filter cannot achieve good visual quality and good performance at the same time.

Description

Image processing method and device and electronic equipment
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method and an apparatus for processing an image, and an electronic device.
Background
With the development of computer technology, the range of applications of intelligent terminals has expanded greatly; for example, they can be used to listen to music, play games, chat online, take photographs, and so on. In terms of photography, the cameras of intelligent terminals now exceed ten million pixels, offering high definition and photographic results comparable to those of a professional camera.
At present, when photographing with an intelligent terminal, a user can not only obtain conventional photographic effects with the camera software built in at the factory, but can also obtain additional effects, such as various filters, by downloading an application (APP) from the network, for example a filter that converts an image into a pencil-drawing style.
However, pencil-drawing-style filters in the prior art cannot achieve both good visual quality and good performance: some can only apply the pencil-drawing style to a static picture, while others can process camera video in real time but run slowly and therefore produce poor results. In the prior art, a pencil-drawing-style filter can also be built with a deep-learning image algorithm, but this requires a powerful GPU as support; even on a back-end server equipped with a high-performance GPU, a real-time processing frame rate is difficult to achieve. In addition, deep-learning-based algorithms require a large amount of training data to be prepared in advance, which costs considerable time and labor.
Disclosure of Invention
According to one aspect of the present disclosure, there is provided a method of processing an image, comprising:
acquiring an original image;
performing first processing on the original image to obtain a first image, wherein the color value of a pixel in the first image is calculated from the color value of the pixel in the original image;
calculating a gray scale map of the original image and a gray scale map of the first image;
calculating an inverse graph of a gray scale map of the first image;
and calculating the gray value of the pixel in the second image according to the gray map of the original image and the inverse map of the gray map of the first image to obtain the second image.
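The five steps above can be sketched with NumPy. This is an illustrative sketch only: it applies the first processing (here, the K × K maximum-filter embodiment) to the gray map rather than per color channel, uses edge padding instead of zero padding, and assumes the final blend is a standard color-dodge, since the original formula image is not reproduced in this text.

```python
import numpy as np

def pencil_sketch(rgb, k=3):
    """Sketch of steps S101-S105: K x K maximum filter (one disclosed
    embodiment of the first processing), gray maps, inversion, and a
    dodge-style blend (assumed form of the blend)."""
    img = rgb.astype(np.float64)
    gray = img.mean(axis=2)                      # gray map of the original image
    pad = k // 2
    padded = np.pad(gray, pad, mode="edge")      # the text pads with 0; edge padding is a variant
    h, w = gray.shape
    # First image: each pixel takes the maximum over its K x K neighborhood.
    first = np.max([padded[dy:dy + h, dx:dx + w]
                    for dy in range(k) for dx in range(k)], axis=0)
    inverse = 255.0 - first                      # inverse map of the first image's gray map
    # Second image: dodge-style blend of the original gray map with the inverse map.
    return np.minimum(255.0, gray * 255.0 / np.maximum(255.0 - inverse, 1.0))
```

On a uniform image the blend saturates to white, which matches the mostly white background of a pencil sketch; edges and textured regions survive as dark strokes.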
Further, the performing the first processing on the original image to obtain a first image includes:
acquiring each pixel in an original image and a K × K neighborhood corresponding to each pixel, wherein each K × K neighborhood comprises K × K pixels, and K is an integer greater than 1;
calculating a maximum color value of the color values of K x K pixels in each K x K neighborhood;
and assigning the maximum color value in the K × K neighborhood to the pixel corresponding to the K × K neighborhood to obtain the first image.
Further, the performing the first processing on the original image to obtain a first image includes:
acquiring each pixel in an original image and a K × K neighborhood corresponding to each pixel, wherein each K × K neighborhood comprises K × K pixels, and K is an integer greater than 1;
calculating an average of the color values of the K × K pixels in each K × K neighborhood;
and assigning the average of the color values of the pixels in the K × K neighborhood to the pixel corresponding to the K × K neighborhood to obtain the first image.
Further, the calculating the gray-scale map of the original image and the gray-scale map of the first image includes:
calculating a first average of the three RGB components of each pixel of the original image, and taking the first average as the gray value of that pixel in the gray-scale map of the original image;
and calculating a second average of the three RGB components of each pixel of the first image, and taking the second average as the gray value of that pixel in the gray-scale map of the first image.
Further, the calculating an inverse graph of the gray-scale map of the first image includes:
acquiring a gray value G1 of a pixel of a gray map of the first image;
the inverse map of the gray-scale map of the first image is calculated by G2 = 255 - G1.
Further, the calculating the gray scale value of the pixel in the second image according to the gray scale map of the original image and the inverse map of the gray scale map of the first image to obtain the second image includes:
acquiring a first gray value of a pixel of a gray image of the original image;
acquiring a second gray value of a pixel of an inverse graph of the gray map of the first image;
calculating a ratio of the first gray value to the second gray value;
and calculating the gray value of the pixel in the second image according to the ratio to obtain a second image.
Further, after the calculating the gray scale value of the pixel in the second image according to the gray scale map of the original image and the inverse map of the gray scale map of the first image to obtain the second image, the method further includes:
enhancing the contrast of the second image to obtain a third image.
Further, the enhancing the contrast of the second image to obtain a third image includes:
according to the formula dst_ij = (src_ij - center) × ratio + center, gray values of the pixels of the third image are calculated, where dst_ij is the gray value of a pixel of the third image, center is the central gray value with 0 ≤ center ≤ 1, ratio is the contrast parameter with ratio > 0, and src_ij is the gray value of the corresponding pixel of the second image.
Further, after the calculating the gray scale value of the pixel in the second image according to the gray scale map of the original image and the inverse map of the gray scale map of the first image to obtain the second image, the method further includes:
acquiring a pencil texture map;
blending the pencil texture map with the second image to obtain a fourth image.
Further, after the calculating the gray scale value of the pixel in the second image according to the gray scale map of the original image and the inverse map of the gray scale map of the first image to obtain the second image, the method further includes:
converting the original image from an RGB color space to an LAB color space;
converting the second image from an RGB color space to an LAB color space;
respectively assigning the A channel component and the B channel component of the original image in the LAB color space to the A channel component and the B channel component of the second image in the LAB color space to obtain a fifth image;
converting the fifth image from the LAB color space to the RGB color space results in a sixth image.
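The channel reassignment in the LAB-based embodiment above can be sketched as follows. The RGB↔LAB conversion itself is not shown (it would normally be done with an image library); the code assumes the two images are already LAB arrays of shape (H, W, 3) with channels L, A, B, and the array names are hypothetical.

```python
import numpy as np

# Assume orig_lab and second_lab have already been converted from RGB to LAB
# (e.g. with an image library); shape (H, W, 3), channels L, A, B.
rng = np.random.default_rng(0)
orig_lab = rng.random((4, 4, 3))
second_lab = rng.random((4, 4, 3))

# Fifth image: keep the second image's L (lightness) channel and take the
# A and B (color) channels from the original image.
fifth = second_lab.copy()
fifth[..., 1:] = orig_lab[..., 1:]
```

This restores the original image's colors onto the sketch while keeping the sketch's lightness structure.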
According to another aspect of the present disclosure, the following technical solutions are also provided:
an apparatus for processing an image, comprising:
the original image acquisition module is used for acquiring an original image;
the first processing module is used for performing first processing on the original image to obtain a first image, and the color value of the pixel in the first image is calculated from the color value of the pixel in the original image;
the gray-scale map calculation module is used for calculating a gray-scale map of the original image and a gray-scale map of the first image;
the inverse graph calculation module is used for calculating an inverse graph of the gray level image of the first image;
and the second image calculation module is used for calculating the gray value of the pixel in the second image according to the gray map of the original image and the inverse map of the gray map of the first image so as to obtain the second image.
Further, the first processing module further includes:
a first neighborhood acquiring module, configured to acquire each pixel in an original image and a K × K neighborhood corresponding to each pixel, where each K × K neighborhood includes K × K pixels;
a maximum color value calculation module for calculating a maximum color value of the color values of K × K pixels in each K × K neighborhood;
and the first assignment module is used for assigning the maximum color value in the K × K neighborhood to the pixel corresponding to the K × K neighborhood so as to obtain the first image.
Further, the first processing module further includes:
a second neighborhood acquiring module, configured to acquire each pixel in an original image and a K × K neighborhood corresponding to each pixel, where each K × K neighborhood includes K × K pixels;
a color average calculation module for calculating an average of color values of K × K pixels in each of the K × K neighborhoods;
and the second assignment module is used for assigning the average value of the color values of the pixels in the K × K neighborhood to the pixels corresponding to the K × K neighborhood so as to obtain the first image.
Further, the grayscale map calculation module further includes:
the original image gray-scale map calculation module is used for calculating a first average of the three RGB components of each pixel of the original image, and taking the first average as the gray value of each pixel in the gray-scale map of the original image;
and the gray-scale map calculation module of the first image is used for calculating a second average of the three RGB components of each pixel of the first image, and taking the second average as the gray value of each pixel in the gray-scale map of the first image.
Further, the inverse map calculation module is further configured to: acquire a gray value G1 of a pixel of the gray-scale map of the first image; and calculate the inverse map of the gray-scale map of the first image by G2 = 255 - G1.
Further, the second image calculation module further includes:
the first gray value acquisition module is used for acquiring a first gray value of a pixel of a gray image of the original image;
the second gray value acquisition module is used for acquiring a second gray value of a pixel of an inverse graph of the gray map of the first image;
a ratio calculation module for calculating a ratio of the first gray value to the second gray value;
and the second image gray value calculating module is used for calculating the gray value of the pixel in the second image according to the ratio to obtain a second image.
Further, the image processing apparatus further includes:
and the contrast enhancement module is used for enhancing the contrast of the second image to obtain a third image.
Further, the contrast enhancement module is further configured to calculate the gray values of the pixels of the third image according to the formula dst_ij = (src_ij - center) × ratio + center, where dst_ij is the gray value of a pixel of the third image, center is the central gray value with 0 ≤ center ≤ 1, ratio is the contrast parameter with ratio > 0, and src_ij is the gray value of the corresponding pixel of the second image.
Further, the image processing apparatus further includes:
the pencil texture map acquisition module is used for acquiring a pencil texture map;
and the mixing module is used for mixing the pencil texture map with the second image to obtain a fourth image.
Further, the image processing apparatus further includes:
a first conversion module for converting an original image from an RGB color space to an LAB color space;
a second conversion module for converting the second image from an RGB color space to an LAB color space;
and the third assignment module is used for assigning the A channel component and the B channel component of the original image in the LAB color space to the A channel component and the B channel component of the second image in the LAB color space respectively to obtain a fifth image.
And the third conversion module is used for converting the fifth image from an LAB color space to an RGB color space to obtain a sixth image.
According to still another aspect of the present disclosure, there is also provided the following technical solution:
an electronic device, comprising: a memory for storing non-transitory computer-readable instructions; and a processor for executing the computer-readable instructions such that, when they are executed, the processor performs the steps of any of the image processing methods described above.
According to still another aspect of the present disclosure, there is also provided the following technical solution:
a computer readable storage medium storing non-transitory computer readable instructions which, when executed by a computer, cause the computer to perform the steps of any of the methods described above.
The foregoing is a summary of the present disclosure; the present disclosure may be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
FIG. 1 is a flow diagram of a method of processing an image according to one embodiment of the present disclosure;
FIG. 2 is a schematic flow diagram of a further method of processing an image according to one embodiment of the present disclosure;
FIG. 3 is a schematic flow chart diagram of a further method of processing an image according to one embodiment of the present disclosure;
FIG. 4 is a schematic flow chart diagram of a further method of processing an image according to one embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides a method for processing an image. The image processing method provided by the embodiment can be executed by a computing device, the computing device can be implemented as software, or implemented as a combination of software and hardware, and the computing device can be integrated in a server, a terminal device and the like. As shown in fig. 1, the image processing method mainly includes the following steps S101 to S105. Wherein:
step S101: acquiring an original image;
in this embodiment, the raw image may be acquired by an image sensor, which refers to various devices that can capture images, and typical image sensors are video cameras, still cameras, and the like. In this embodiment, the image sensor may be a camera on the terminal device, such as a front-facing or rear-facing camera on a smart phone, and an image acquired by the camera may be directly displayed on a display screen of the smart phone.
In an embodiment, the acquiring the original image may be acquiring a current image frame of a video currently captured by the terminal device, and since the video is composed of a plurality of image frames, in this embodiment, the video image is acquired, and a video frame image in the video image is taken as the original image.
In one embodiment, the acquiring the original image may be acquiring any form of image from a local storage device or a storage device pointed to by a network address, such as a static picture, a dynamic picture, or videos in various formats, and the like, which is not limited herein.
Step S102: performing first processing on the original image to obtain a first image, wherein the color value of a pixel in the first image is calculated from the color value of the pixel in the original image;
in a specific embodiment, the performing the first processing on the original image to obtain a first image includes: acquiring each pixel in an original image and a K × K neighborhood corresponding to each pixel, wherein each K × K neighborhood comprises K × K pixels, and K is an integer greater than 1; calculating a maximum color value of the color values of K x K pixels in each K x K neighborhood; and assigning the maximum color value in the K × K neighborhood to the pixel corresponding to the K × K neighborhood to obtain the first image. Optionally, K is 3, at this time, the neighborhood of each pixel is a 3 × 3 neighborhood, which is an example as follows, and the following is 3 × 3 neighborhoods of the pixel points with the color value of 126:
0	233	240
125	126	90
87	199	200
Taking the maximum color value in the neighborhood as the color value of the center point, the neighborhood becomes:
0	233	240
125	240	90
87	199	200
The color value of the pixel whose color value was 126 becomes 240. Performing this processing on every pixel of the original image yields the first image. For pixels at the edge of the image, the neighborhood maximum can be computed by padding the image with 0 outside the edge pixels and treating the padded values as part of the neighborhood. It should be understood that the padding value can be any value; 0 is merely an example and does not limit the present disclosure. For color values in RGB space, the sum of the three components can serve as the basis for comparison when taking the maximum color value, which is not described in detail here. Optionally, the computation of the neighborhood maximum can be optimized: first take the neighborhood maximum of each point along the X axis of the image, then take the maximum along the Y axis. In addition, when K is large, the original image may first be reduced, processed with a smaller K, and then enlarged back to the original size by bilinear interpolation, which is not described in detail here.
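The worked example above can be checked numerically. Since the original figure is not reproduced in this text, the arrangement of the nine values below is assumed (126 is the center pixel; the nine values are those listed in the averaging example that follows):

```python
import numpy as np

# Assumed layout of the 3 x 3 neighborhood; 126 is the center pixel.
neigh = np.array([[0, 233, 240],
                  [125, 126, 90],
                  [87, 199, 200]])
new_center = int(neigh.max())   # the neighborhood maximum replaces the center value
```

Applying this per pixel is exactly a gray-scale dilation with a K × K structuring element, which is why the separable X-then-Y optimization mentioned above is valid.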
In a specific embodiment, performing the first processing on the original image to obtain a first image includes: acquiring each pixel in the original image and a K × K neighborhood corresponding to each pixel, where each K × K neighborhood includes K × K pixels and K is an integer greater than 1; calculating the average of the color values of the K × K pixels in each K × K neighborhood; and assigning the average of the color values of the pixels in the K × K neighborhood to the pixel corresponding to the K × K neighborhood to obtain the first image.
Optionally, K = 3, in which case the neighborhood of each pixel is a 3 × 3 neighborhood. The following is an example, showing the 3 × 3 neighborhood of a pixel whose color value is 126:
0	233	240
125	126	90
87	199	200
Calculating the average of the color values in the neighborhood, (0 + 233 + 240 + 125 + 126 + 90 + 87 + 199 + 200)/9 ≈ 145, the neighborhood becomes:
0	233	240
125	145	90
87	199	200
The color value of the pixel whose color value was 126 becomes 145. Performing this processing on every pixel of the original image yields the first image. For pixels at the edge of the image, the neighborhood average can be computed by padding the image with 0 outside the edge pixels and treating the padded values as part of the neighborhood. It should be understood that the padding value can be any value; 0 is merely an example and does not limit the present disclosure. The specific example above uses a simple arithmetic mean; it should be understood that a weighted average may also be used, with weights set according to the distance between the center pixel and each pixel in the neighborhood, and the weighted average taken as the color value of the current pixel, which is not described in detail here.
In this step, a first image is obtained by acquiring a neighborhood of each pixel in the original image and calculating a color value of each pixel according to a color value of the pixel in the neighborhood.
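The averaging embodiment, including the weighted variant mentioned above, can be sketched on the worked example. The weight kernel below is hypothetical (the text only says weights may be distance-based), and the layout of the nine values is assumed:

```python
import numpy as np

neigh = np.array([[0, 233, 240],
                  [125, 126, 90],
                  [87, 199, 200]], dtype=np.float64)
plain = neigh.mean()   # unweighted mean of the nine values, 1300/9, about 144.4

# Hypothetical distance-based weights: center counts most, corners least.
weights = np.array([[1, 2, 1],
                    [2, 4, 2],
                    [1, 2, 1]], dtype=np.float64)
weighted = (neigh * weights).sum() / weights.sum()
```

Note the exact unweighted mean of these nine values is 1300/9 ≈ 144.4, while the text reports 145; this suggests one digit in the extracted example was garbled, or the value was rounded up.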
Step S103: calculating a gray scale map of the original image and a gray scale map of the first image;
In one embodiment, calculating the gray-scale map of the original image and the gray-scale map of the first image includes: calculating a first average of the three RGB components of each pixel of the original image, and taking the first average as the gray value of that pixel in the gray-scale map of the original image; and calculating a second average of the three RGB components of each pixel of the first image, and taking the second average as the gray value of that pixel in the gray-scale map of the first image. Specifically, if the color value of a pixel in RGB space is (123, 200, 112), the average of the three components is (123 + 200 + 112)/3 = 145, so the gray value of the pixel is (145, 145, 145); performing this operation on every pixel of an image yields its gray-scale map. It should be understood that there are many ways to compute a gray-scale map: typically, the maximum or the minimum of the three RGB components may be taken as the gray value, or a weighted average of the three components may be used; this is not limited here and will not be described again.
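The gray-value computation for the example pixel, together with the alternative gray maps mentioned above, is a one-liner each:

```python
import numpy as np

rgb_pixel = np.array([123, 200, 112], dtype=np.float64)
gray_value = rgb_pixel.mean()   # (123 + 200 + 112) / 3 = 145.0
# Alternative gray-map conventions mentioned in the text:
gray_max = rgb_pixel.max()      # maximum of the three components
gray_min = rgb_pixel.min()      # minimum of the three components
```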
Step S104: calculating an inverse graph of a gray scale map of the first image;
In a specific embodiment, calculating the inverse map of the gray-scale map of the first image includes: acquiring the gray value G1 of each pixel of the gray-scale map of the first image; and computing the inverse map of the gray-scale map of the first image by G2 = 255 - G1. Specifically, if the color value of a pixel in the gray-scale map of the first image is (145, 145, 145) in RGB space, its color in the inverse map is (255 - 145, 255 - 145, 255 - 145) = (110, 110, 110).
Step S105: and calculating the gray value of the pixel in the second image according to the gray map of the original image and the inverse map of the gray map of the first image to obtain the second image.
In a specific embodiment, calculating the gray values of the pixels in the second image according to the gray-scale map of the original image and the inverse map of the gray-scale map of the first image to obtain the second image includes: acquiring a first gray value of a pixel of the gray-scale map of the original image; acquiring a second gray value of the corresponding pixel of the inverse map of the gray-scale map of the first image; calculating a ratio from the first gray value and the second gray value; and calculating the gray value of the corresponding pixel in the second image according to the ratio, to obtain the second image. In this embodiment, the pixels of the inverse map of the gray-scale map of the first image correspond one-to-one with the pixels of the gray-scale map of the original image, i.e., the two pixels occupy the same relative position in their images. Denote the gray-scale map of the original image as ButtomLayer and the inverse map of the gray-scale map of the first image as TopLayer; the gray values of the pixels in the second image are then:
R1 = min(255, ButtomLayer × 255 / (255 - TopLayer))    (1)
The second image is obtained by applying formula (1) pixel by pixel to the gray-scale map of the original image and the inverse map of the gray-scale map of the first image. Specifically, if the gray value of a pixel in the gray-scale map of the original image is 125 and the gray value of the corresponding pixel in the gray-scale map of the first image is 50, then the gray value of the corresponding pixel in the inverse map of the gray-scale map of the first image is 255 - 50 = 205. Using formula (1), the gray value of the corresponding pixel of the second image is:
R1 = min(255, 125 × 255 / (255 - 205)) = min(255, 637.5) = 255
Processing the gray-scale map of the original image and the inverse map of the gray-scale map of the first image pixel by pixel in this way yields the second image, which has the style of a simple sketch outline.
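The pixel-level computation of the worked example can be traced directly. The color-dodge form of the blend is an assumption here, since the image of formula (1) is not reproduced in this text:

```python
bottom = 125.0               # gray value in the original image's gray map
first_gray = 50.0            # gray value in the first image's gray map
top = 255.0 - first_gray     # corresponding pixel of the inverse map
# Dodge-style blend (assumed form of formula (1)), clamped to 255.
second = min(255.0, bottom * 255.0 / (255.0 - top))
```

The division blows mid and light tones out toward white, so only regions where the original and the processed image differ strongly (edges) stay dark.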
Through steps S101 to S105 described above, the original image can be converted into an image with a pencil-sketch style. As shown in fig. 2, to improve the pencil-sketch effect further, after step S105 the method may further include:
step S201, enhancing the contrast of the second image to obtain a third image.
In a specific embodiment, the enhancing the contrast of the second image to obtain a third image includes:
according to formula (2):

dst_ij = (src_ij − center) × ratio + center    (2)

the gray values of the pixels of the third image are calculated, where dst_ij is the gray value of a pixel of the third image, center is the central gray value with 0 ≤ center ≤ 1, ratio is the contrast parameter with ratio > 0, and src_ij is the gray value of the corresponding pixel of the second image. In this embodiment, the gray values are normalized gray values, and the gray values of the pixels are stretched about the value of center, achieving the effect of enhancing the contrast. Optionally, center is 0.9 and ratio is 1.5.
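Formula (2) can be illustrated with a small numpy sketch; `enhance_contrast` is a hypothetical helper name, and clamping the stretched values back into [0, 1] is an assumption the text does not state.

```python
import numpy as np

def enhance_contrast(src, center=0.9, ratio=1.5):
    """Formula (2): stretch normalized gray values about `center`.

    src is a float array of gray values already normalized to [0, 1];
    center=0.9 and ratio=1.5 are the optional defaults given in the text.
    """
    dst = (src - center) * ratio + center
    # Clamp back to the valid normalized range (assumption).
    return np.clip(dst, 0.0, 1.0)
```

For example, a pixel at the center value 0.9 is unchanged, while a pixel at 0.5 is pushed down to (0.5 − 0.9) × 1.5 + 0.9 = 0.3, widening the spread of gray values.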
As shown in fig. 3, in order to make the pencil sketch style more effective, after step S105, the method may further include:
step S301, acquiring a pencil texture map;
step S302, mixing the pencil texture map with the second image to obtain a fourth image.
The pencil texture map may be a texture map of a hand-drawn pencil image, such as pencil strokes inclined at a certain angle. In step S302, the pencil texture map and the second image may be mixed in simple equal proportion: for example, the gray value of a pixel in the pencil texture map and the gray value of the corresponding pixel in the second image each receive a weight of 0.5, and the resulting gray value is taken as the gray value of the fourth image. In a specific embodiment, two different pencil texture maps, denoted Mask1 and Mask2, may be provided; optionally, one is dark and the other light, with their textures running in opposite directions. Denoting the second image as Sketch and the gray map of the original image as Gray, the gray value of the fourth image can be calculated by formula (3):
R2 = Sketch × Mask1 × Mask2, when Gray ∈ [0, 0.1]
R2 = Sketch × Mask1, when Gray ∈ (0.1, 0.33]
R2 = Sketch × Mask2, when Gray ∈ (0.33, 0.5]
R2 = Sketch, when Gray ∈ (0.5, 1]    (3)

where R2 is the gray value of a pixel of the synthesized fourth image and Gray is the gray value of the corresponding pixel in the gray map of the original image. That is, when the gray value of a pixel in the gray map of the original image lies in [0, 0.1], the gray value of the corresponding pixel in the fourth image is calculated as Sketch × Mask1 × Mask2; when it lies in (0.1, 0.33], as Sketch × Mask1; when it lies in (0.33, 0.5], as Sketch × Mask2; and when it lies in (0.5, 1], the gray value of Sketch is taken directly. The three thresholds 0.1, 0.33 and 0.5 in the above formula are empirical values; in actual use they may be set to other values according to the desired effect and are not limited to these three. In this step, different image blending modes are selected according to the gray values of the original image to blend the second image with the pencil texture maps, forming a textured pencil drawing whose realistic pencil effect is greatly enhanced. Steps S301 and S302 may also be executed after step S201, in which case the third image is mixed with the pencil texture maps; specifically, the second image in formula (3) is replaced by the third image, which is not described herein again.
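The piecewise selection of formula (3) can be written as nested `numpy.where` calls. This is a sketch under the reconstruction above (in particular, the (0.1, 0.33] branch is taken to be Sketch × Mask1), with all inputs assumed normalized to [0, 1] and `blend_textures` a hypothetical helper name.

```python
import numpy as np

def blend_textures(sketch, gray, mask1, mask2):
    """Formula (3): choose a blend per pixel based on the original's gray.

    All arrays have the same shape and values normalized to [0, 1];
    the thresholds 0.1, 0.33 and 0.5 are the empirical values from the text.
    """
    r2 = np.where(gray <= 0.1, sketch * mask1 * mask2,   # darkest regions
         np.where(gray <= 0.33, sketch * mask1,          # dark regions
         np.where(gray <= 0.5, sketch * mask2,           # mid-tones
                  sketch)))                              # light regions: sketch as-is
    return r2
```

Darker regions of the original thus receive more texture (both masks multiplied in), while light regions keep the plain sketch value.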
The second image, the third image and the fourth image obtained through the above steps are all gray scale images, that is, graphite pencil sketches. As shown in fig. 4, to obtain an image with a colored-pencil style, after step S105 the method may further include:
step S401, converting an original image from an RGB color space to an LAB color space;
step S402, converting the second image from RGB color space to LAB color space;
step S403, respectively assigning the A channel component and the B channel component of the original image in the LAB color space to the A channel component and the B channel component of the second image in the LAB color space to obtain a fifth image;
step S404, converting the fifth image from the LAB color space to the RGB color space to obtain a sixth image.
Through the above steps, the color of the original image in the LAB space is assigned to the A and B channels of the pencil drawing in the LAB space, and the pencil drawing is then converted from the LAB space back to the RGB space to obtain a color pencil drawing. It is understood that the steps in fig. 4 may also be executed after step S201 or step S302; in that case, the second image only needs to be replaced by the third image or the fourth image to obtain the corresponding color pencil drawing.
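The channel assignment at the heart of steps S401 to S404 (leaving the color-space conversion itself aside) can be sketched as below. The arrays are assumed to be already in LAB (e.g. produced by a converter such as skimage's `rgb2lab`), and `transfer_ab` is a hypothetical helper name.

```python
import numpy as np

def transfer_ab(original_lab, sketch_lab):
    """Step S403: copy the A and B chroma channels of the original image
    (in LAB) onto the pencil sketch (in LAB), keeping the sketch's L channel.

    Both inputs are H x W x 3 float arrays already in LAB order (L, A, B).
    """
    fifth = sketch_lab.copy()
    fifth[..., 1:] = original_lab[..., 1:]   # keep sketch L, take original A and B
    return fifth
```

The returned fifth image carries the sketch's lightness but the original's chroma; converting it back to RGB (step S404) yields the colored-pencil result.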
The present disclosure discloses an image processing method and apparatus and an electronic device. The image processing method includes: acquiring an original image; performing first processing on the original image to obtain a first image, wherein the color value of a pixel in the first image is calculated from the color values of pixels in the original image; calculating a gray map of the original image and a gray map of the first image; calculating an inverse map of the gray map of the first image; and calculating the gray values of the pixels in the second image according to the gray map of the original image and the inverse map of the gray map of the first image to obtain the second image. By processing the original image and blending the gray map of the processed image with the gray map of the original image, the method obtains an image with a pencil-drawing style and solves the technical problem in the prior art that a pencil-drawing-style filter cannot achieve both good effect and good performance.
In the above, although the steps in the above method embodiments are described in the above sequence, it should be clear to those skilled in the art that the steps in the embodiments of the present disclosure are not necessarily performed in the above sequence, and may also be performed in other sequences such as reverse, parallel, and cross, and further, on the basis of the above steps, other steps may also be added by those skilled in the art, and these obvious modifications or equivalents should also be included in the protection scope of the present disclosure, and are not described herein again.
For convenience of description, only the relevant parts of the embodiments of the present disclosure are shown, and details of the specific techniques are not disclosed, please refer to the embodiments of the method of the present disclosure.
The embodiment of the disclosure provides an image processing apparatus. The apparatus may perform the steps described in the above embodiment of the image processing method. As shown in fig. 5, the apparatus 500 mainly includes: an original image acquisition module 501, a first processing module 502, a gray-scale map calculation module 503, a reverse map calculation module 504 and a second image calculation module 505.
an original image obtaining module 501, configured to obtain an original image;
a first processing module 502, configured to perform a first processing on the original image to obtain a first image, where a color value of a pixel in the first image is calculated from a color value of a pixel in the original image;
a grayscale map calculation module 503, configured to calculate a grayscale map of the original image and a grayscale map of the first image;
a reverse graph calculation module 504, configured to calculate a reverse graph of the grayscale map of the first image;
and the second image calculating module 505 is configured to calculate the gray scale value of the pixel in the second image according to the gray scale map of the original image and the inverse map of the gray scale map of the first image to obtain the second image.
Further, the first processing module 502 further includes:
a first neighborhood acquisition module, configured to acquire each pixel in the original image and the K × K neighborhood corresponding to each pixel, where each K × K neighborhood includes K × K pixels;
a maximum color value calculation module for calculating a maximum color value of the color values of K × K pixels in each K × K neighborhood;
and the first assignment module is used for assigning the maximum color value in the K × K neighborhood to the pixel corresponding to the K × K neighborhood so as to obtain the first image.
Further, the first processing module 502 further includes:
a second neighborhood acquisition module, configured to acquire each pixel in the original image and the K × K neighborhood corresponding to each pixel, where each K × K neighborhood includes K × K pixels;
a color average calculation module for calculating an average of color values of K × K pixels in each of the K × K neighborhoods;
and the second assignment module is used for assigning the average value of the color values of the pixels in the K × K neighborhood to the pixels corresponding to the K × K neighborhood so as to obtain the first image.
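The two first-processing variants above (maximum and mean over a K × K neighborhood, applied per channel) can be sketched in one numpy helper. The edge-padding strategy and the function name are assumptions, and a real implementation would use a vectorized filter rather than Python loops; this is only an illustration of the neighborhood computation.

```python
import numpy as np

def neighborhood_filter(img, k=3, mode="max"):
    """First processing: replace each pixel by the max (or mean) of its
    K x K neighborhood, per channel. Edge pixels use replicate padding,
    an assumption the text does not specify.
    """
    pad = k // 2
    padded = np.pad(img.astype(np.float32),
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    h, w, _ = img.shape
    out = np.empty_like(img, dtype=np.float32)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + k, j:j + k]       # K x K x C window
            out[i, j] = (window.max(axis=(0, 1)) if mode == "max"
                         else window.mean(axis=(0, 1)))
    return out.astype(img.dtype)
```

The "max" mode brightens the image (a gray dilation), which makes the subsequent dodge blend of formula (1) bring out edges; the "mean" mode is the blurring variant.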
Further, the grayscale map calculation module 503 further includes:
the original image gray scale map calculation module is used for calculating a first average value of RGB color three components of each pixel of the original image, and taking the first average value as a gray scale value of each pixel in the gray scale map of the original image;
and the gray map calculation module of the first image is used for calculating a second average value of RGB (red, green and blue) color three components of each pixel of the first image, and taking the second average value as the gray value of each pixel in the gray map of the first image.
Further, the inverse graph calculation module 504 is further configured to: acquire a gray value G1 of a pixel of the gray map of the first image; and calculate the inverse map of the gray map of the first image by G2 = 255 − G1.
Further, the second image calculation module 505 further includes:
the first gray value acquisition module is used for acquiring a first gray value of a pixel of a gray image of the original image;
the second gray value acquisition module is used for acquiring a second gray value of a pixel of an inverse graph of the gray map of the first image;
a ratio calculation module for calculating a ratio of the first gray value to the second gray value;
and the second image gray value calculating module is used for calculating the gray value of the pixel in the second image according to the ratio to obtain a second image.
Further, the apparatus 500 for processing an image further includes:
a contrast enhancement module 506, configured to enhance the contrast of the second image to obtain a third image.
Further, the contrast enhancement module 506 is further configured to calculate the gray values of the pixels of the third image according to the formula dst_ij = (src_ij − center) × ratio + center, where dst_ij is the gray value of a pixel of the third image, center is the central gray value with 0 ≤ center ≤ 1, ratio is the contrast parameter with ratio > 0, and src_ij is the gray value of the corresponding pixel of the second image.
Further, the apparatus 500 for processing an image further includes:
a pencil texture map obtaining module 507, configured to obtain a pencil texture map;
a blending module 508, configured to blend the pencil texture map with the second image to obtain a fourth image.
Further, the apparatus 500 for processing an image further includes:
a first conversion module 509 for converting the original image from an RGB color space to an LAB color space;
a second conversion module 510 for converting the second image from an RGB color space to an LAB color space;
the third assigning module 511 is configured to assign the a channel component and the B channel component of the original image in the LAB color space to the a channel component and the B channel component of the second image in the LAB color space, respectively, to obtain a fifth image.
A third converting module 512, configured to convert the fifth image from an LAB color space to an RGB color space to obtain a sixth image.
The apparatus shown in fig. 5 can perform the method of the embodiments shown in fig. 1, fig. 2, fig. 3 and fig. 4, and the detailed description of the embodiment may refer to the related description of the embodiments shown in fig. 1, fig. 2, fig. 3 and fig. 4. The implementation process and technical effect of the technical solution refer to the descriptions in the embodiments shown in fig. 1, fig. 2, fig. 3, and fig. 4, which are not described herein again.
Referring now to FIG. 6, a block diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring an original image; performing first processing on the original image to obtain a first image, wherein the color value of a pixel in the first image is calculated from the color value of the pixel in the original image; calculating a gray scale map of the original image and a gray scale map of the first image; calculating an inverse graph of a gray scale map of the first image; and calculating the gray value of the pixel in the second image according to the gray map of the original image and the inverse map of the gray map of the first image to obtain the second image.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.

Claims (12)

1. A method of processing an image, comprising:
acquiring an original image;
acquiring a neighborhood of each pixel in an original image, and calculating a color value of each pixel according to the color value of the pixel in the neighborhood of each pixel to obtain a first image;
calculating a gray scale map of the original image and a gray scale map of the first image;
calculating an inverse graph of a gray scale map of the first image;
calculating the gray level image of the original image and the reverse image of the gray level image of the first image pixel by pixel to obtain a second image; wherein the calculating the gray-scale image of the original image and the inverse image of the gray-scale image of the first image pixel by pixel to obtain a second image comprises:
acquiring a first gray value of a pixel of a gray image of the original image;
acquiring a second gray value of a pixel of an inverse graph of the gray map of the first image;
calculating a ratio of the first gray value to the second gray value;
and multiplying the ratio by 256 to calculate the gray value of the pixel, so as to obtain the second image.
2. The method of processing an image according to claim 1, wherein the obtaining a neighborhood of each pixel in the original image, and calculating a color value of each pixel according to the color value of the pixel in the neighborhood of each pixel to obtain the first image comprises:
acquiring each pixel in an original image and a K × K neighborhood corresponding to each pixel, wherein each K × K neighborhood comprises K × K pixels, and K is an integer greater than 1;
calculating a maximum color value of the color values of K x K pixels in each K x K neighborhood;
and assigning the maximum color value in the K × K neighborhood to the pixel corresponding to the K × K neighborhood to obtain the first image.
3. The method of processing an image according to claim 1, wherein the obtaining a neighborhood of each pixel in the original image, and calculating a color value of each pixel according to the color value of the pixel in the neighborhood of each pixel to obtain the first image comprises:
acquiring each pixel in an original image and a K × K neighborhood corresponding to each pixel, wherein each K × K neighborhood comprises K × K pixels, and K is an integer greater than 1;
calculating an average of color values of K x K pixels in each of the K x K neighbors;
and assigning the average value of the color values of the pixels in the K-K neighborhood to the pixels corresponding to the K-K neighborhood to obtain the first image.
4. The method for processing the image according to claim 1, wherein said calculating the gray map of the original image and the gray map of the first image comprises:
calculating a first average value of RGB color three components of each pixel of the original image, and taking the first average value as a gray value of each pixel in a gray map of the original image;
and calculating a second average value of the RGB three components of each pixel of the first image, and taking the second average value as the gray value of each pixel in the gray map of the first image.
5. The method for processing the image according to claim 1, wherein said calculating an inverse of the gray-scale map of the first image comprises:
acquiring a gray value G1 of a pixel of a gray map of the first image;
calculating the inverse map of the gray map of the first image by G2 = 255 − G1.
6. The image processing method according to claim 1, wherein after calculating the gray-scale values of the pixels in the second image according to the gray-scale map of the original image and the inverse map of the gray-scale map of the first image to obtain the second image, the method further comprises:
enhancing the contrast of the second image to obtain a third image.
7. The method for processing the image according to claim 6, wherein said enhancing the contrast of the second image to obtain a third image comprises:
according to the formula dst_ij = (src_ij − center) × ratio + center, calculating gray values of pixels of the third image, wherein dst_ij is the gray value of a pixel of the third image, center is the central gray value, 0 ≤ center ≤ 1, ratio is the contrast parameter, ratio > 0, and src_ij is the gray value of the corresponding pixel of the second image.
8. The image processing method according to claim 1, wherein after calculating the gray-scale values of the pixels in the second image according to the gray-scale map of the original image and the inverse map of the gray-scale map of the first image to obtain the second image, the method further comprises:
acquiring a pencil texture map;
blending the pencil texture map with the second image to obtain a fourth image.
9. The image processing method according to claim 1, wherein after calculating the gray-scale values of the pixels in the second image according to the gray-scale map of the original image and the inverse map of the gray-scale map of the first image to obtain the second image, the method further comprises:
converting the original image from an RGB color space to an LAB color space;
converting the second image from an RGB color space to an LAB color space;
respectively assigning the A channel component and the B channel component of the original image in the LAB color space to the A channel component and the B channel component of the second image in the LAB color space to obtain a fifth image;
converting the fifth image from the LAB color space to the RGB color space results in a sixth image.
10. An apparatus for processing an image, comprising:
the original image acquisition module is used for acquiring an original image;
the first processing module is used for acquiring the neighborhood of each pixel in the original image, and obtaining a first image by calculating the color value of each pixel according to the color value of the pixel in the neighborhood of each pixel;
the gray-scale map calculation module is used for calculating a gray-scale map of the original image and a gray-scale map of the first image;
the inverse graph calculation module is used for calculating an inverse graph of the gray level image of the first image;
the second image calculation module is used for calculating the gray level image of the original image and the reverse image of the gray level image of the first image pixel by pixel to obtain a second image; wherein the calculating the gray-scale image of the original image and the inverse image of the gray-scale image of the first image pixel by pixel to obtain a second image comprises:
acquiring a first gray value of a pixel of a gray image of the original image;
acquiring a second gray value of a pixel of an inverse graph of the gray map of the first image;
calculating a ratio of the first gray value to the second gray value;
and multiplying the ratio by 256 to calculate the gray value of the pixel, so as to obtain the second image.
11. An electronic device, comprising:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer readable instructions such that the processor when executing performs the method of processing an image according to any of claims 1-9.
12. A computer-readable storage medium storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the method of processing an image of any one of claims 1-9.
CN201910124386.XA 2019-02-20 2019-02-20 Image processing method and device and electronic equipment Active CN110070495B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910124386.XA CN110070495B (en) 2019-02-20 2019-02-20 Image processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN110070495A CN110070495A (en) 2019-07-30
CN110070495B true CN110070495B (en) 2021-09-17

Family

ID=67365992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910124386.XA Active CN110070495B (en) 2019-02-20 2019-02-20 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110070495B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555799A (en) * 2019-09-26 2019-12-10 北京百度网讯科技有限公司 Method and apparatus for processing video
CN110599437A (en) * 2019-09-26 2019-12-20 北京百度网讯科技有限公司 Method and apparatus for processing video
CN111754425A (en) * 2020-06-05 2020-10-09 北京有竹居网络技术有限公司 Image highlight removing processing method and device and electronic equipment
CN113052726B (en) * 2021-03-24 2021-10-15 广西凯合置业集团有限公司 Smart community property service system based on cloud computing

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007139836A2 (en) * 2006-05-24 2007-12-06 Scan-Optics, Llc Optical mark reader
CN102572219A (en) * 2012-01-19 2012-07-11 西安联客信息技术有限公司 Mobile terminal and image processing method thereof
CN103093437A (en) * 2013-01-30 2013-05-08 深圳深讯和科技有限公司 Method and device for generating pencil drawing style image
CN103685858A (en) * 2012-08-31 2014-03-26 北京三星通信技术研究有限公司 Real-time video processing method and equipment
CN103914862A (en) * 2014-03-10 2014-07-09 上海大学 Pencil sketch simulating method based on edge tangent stream
CN105528765A (en) * 2015-12-02 2016-04-27 小米科技有限责任公司 Method and device for processing image
CN106846390A (en) * 2017-02-27 2017-06-13 迈吉客科技(北京)有限公司 A kind of method and device of image procossing
CN104915975B (en) * 2015-06-03 2018-08-10 厦门美图之家科技有限公司 A kind of image processing method and system of simulation wax crayon colored drawing


Also Published As

Publication number Publication date
CN110070495A (en) 2019-07-30

Similar Documents

Publication Publication Date Title
CN110070495B (en) Image processing method and device and electronic equipment
CN110288551B (en) Video beautifying method and device and electronic equipment
CN110070551B (en) Video image rendering method and device and electronic equipment
CN110069974B (en) Highlight image processing method and device and electronic equipment
CN110728622B (en) Fisheye image processing method, device, electronic equipment and computer readable medium
CN110288520B (en) Image beautifying method and device and electronic equipment
CN110070515B (en) Image synthesis method, apparatus and computer-readable storage medium
CN111258519B (en) Screen split implementation method, device, terminal and medium
US11893770B2 (en) Method for converting a picture into a video, device, and storage medium
CN110796664A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
WO2022247630A1 (en) Image processing method and apparatus, electronic device and storage medium
CN111626921A (en) Picture processing method and device and electronic equipment
CN112819691B (en) Image processing method, device, equipment and readable storage medium
CN110070482B (en) Image processing method, apparatus and computer readable storage medium
CN110264430B (en) Video beautifying method and device and electronic equipment
CN111223105B (en) Image processing method and device
CN111292247A (en) Image processing method and device
CN111200705B (en) Image processing method and device
CN114723600A (en) Method, device, equipment, storage medium and program product for generating cosmetic special effect
CN114170341A (en) Image processing method, device, equipment and medium
CN111292276B (en) Image processing method and device
CN111292245A (en) Image processing method and device
WO2021031846A1 (en) Water ripple effect implementing method and apparatus, electronic device, and computer readable storage medium
CN110288554B (en) Video beautifying method and device and electronic equipment
CN111738958B (en) Picture restoration method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.