CN106558040B - Person image processing method and apparatus - Google Patents

Person image processing method and apparatus

Info

Publication number
CN106558040B
Authority
CN
China
Prior art keywords
processed
human body
pixel
body area
width
Prior art date
Legal status
Active
Application number
CN201510613580.6A
Other languages
Chinese (zh)
Other versions
CN106558040A (en)
Inventor
谭国富
Current Assignee
Tencent Technology Shenzhen Co Ltd
Tencent Cloud Computing Beijing Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201510613580.6A
Publication of CN106558040A
Application granted
Publication of CN106558040B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4084Scaling of whole images or parts thereof, e.g. expanding or contracting in the transform domain, e.g. fast Fourier transform [FFT] domain scaling

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a person image processing method and apparatus. The method comprises the following steps: obtaining an image; detecting the position of a face in the image and marking the parameters of the face; estimating, from the parameters of the face, the parameters of a to-be-processed body part region in the image; determining the to-be-processed body part region according to those parameters; obtaining the width of the to-be-processed body part in the region; and scaling the pixel values of the pixels within that width to obtain the scaled pixel value of each pixel. The method and apparatus process the pixels of the to-be-processed body part in a person image automatically, without the expense of learning professional photo-retouching software, so the person in the picture is processed automatically and efficiently.

Description

Person image processing method and apparatus
Technical field
The present invention relates to the field of image processing, and in particular to a person image processing method and apparatus.
Background
At present, with the popularity of electronic products such as digital cameras, camera phones and video cameras, more and more users take self-portraits, or photograph others, with devices that have a shooting function. However, because of lighting, the imaging device, personal appearance, shooting angle, shooting posture, lens distortion and similar factors, some captured images turn out poorly; in particular the body shape is often unsatisfactory, and some users feel they look too heavy, which spoils the overall effect of the image. For this reason, some professionals use software such as Photoshop to slim the figure and make the person look thinner and more elegant. However, professional software such as Photoshop has a high learning cost, manual operation is cumbersome, and processing efficiency is low.
Summary of the invention
Based on this, in view of the problem that conventional processing of person images requires cumbersome manual operation and is inefficient, it is necessary to provide a person image processing method that can process images automatically and improve processing efficiency.
In addition, it is necessary to provide a person image processing apparatus that can process images automatically and improve processing efficiency.
A person image processing method comprises the following steps:
obtaining an image;
detecting the position of a face in the image, and marking the parameters of the face;
estimating the parameters of a to-be-processed body part region in the image according to the parameters of the face;
determining the to-be-processed body part region according to the parameters of the to-be-processed body part region;
obtaining the width of the to-be-processed body part in the to-be-processed body part region;
scaling the pixel value of each pixel within the width of the to-be-processed body part in the to-be-processed body part region, to obtain the scaled pixel value of each pixel.
A person image processing apparatus comprises:
an image obtaining module, configured to obtain an image;
a detection and marking module, configured to detect the position of a face in the image and mark the parameters of the face;
an estimation module, configured to estimate the parameters of a to-be-processed body part region in the image according to the parameters of the face;
a region determination module, configured to determine the to-be-processed body part region according to the parameters of the to-be-processed body part region;
a width detection module, configured to obtain the width of the to-be-processed body part in the to-be-processed body part region;
a scaling module, configured to scale the pixel value of each pixel within the width of the to-be-processed body part in the to-be-processed body part region, to obtain the scaled pixel value of each pixel.
With the above person image processing method and apparatus, after an image is obtained, the position of the face in the image is detected and the parameters of the face are marked; the parameters of the to-be-processed body part region are estimated from the face parameters; the region is determined from those parameters; the width of the to-be-processed body part in the region is obtained; and the pixels within that width are scaled to obtain the processed image. Pixels of the to-be-processed body part in a person image are thus processed automatically, without the expense of learning professional photo-retouching software, so the person in the picture is processed automatically and efficiently.
Brief description of the drawings
Fig. 1 is a schematic diagram of the internal structure of a terminal with a shooting function in one embodiment;
Fig. 2 is a flowchart of a person image processing method in one embodiment;
Fig. 3 is a flowchart of a person image processing method in another embodiment;
Fig. 4 is a schematic diagram of face marking;
Fig. 5 is a schematic diagram of the positions of the parameters of the human body region;
Fig. 6 is a schematic diagram of the positions of the parameters of the human waist region;
Fig. 7 is a structural block diagram of a person image processing apparatus in one embodiment;
Fig. 8 is a structural block diagram of a person image processing apparatus in another embodiment.
Detailed description of the embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
Fig. 1 is a schematic diagram of the internal structure of a terminal with a shooting function in one embodiment. As shown in Fig. 1, the terminal includes a processor, a storage medium, a memory, a display screen and an input device connected by a system bus. The storage medium of the terminal stores an operating system and a person image processing apparatus, and the person image processing apparatus is used to implement a person image processing method. The processor provides computing and control capability and supports the operation of the whole terminal. The memory in the terminal provides an environment for running the person image processing apparatus stored in the storage medium. The display screen of the terminal may be a liquid-crystal display, an electronic-ink display or the like; the input device may be a touch layer covering the display screen, a key, trackball or trackpad provided on the housing of the terminal, or an external keyboard, trackpad, mouse or the like. The terminal may be a mobile phone, a tablet computer, a personal digital assistant or the like. Those skilled in the art will understand that the structure shown in Fig. 1 is merely a block diagram of the part of the structure relevant to the present solution and does not constitute a limitation on the terminal to which the solution is applied; a specific terminal may include more or fewer components than shown, combine certain components, or have a different component arrangement.
Fig. 2 is a flowchart of a person image processing method in one embodiment. As shown in Fig. 2, a person image processing method comprises the following steps:
Step 202: obtain an image.
Specifically, the image may be captured by an acquisition device such as a camera. The captured image may be encoded; the encoding format may be BMP (Bitmap), JPEG (Joint Photographic Experts Group), GIF (Graphics Interchange Format), TGA (Tagged Graphics), EXIF (Exchangeable Image File Format), PNG (Portable Network Graphics) and so on.
If the obtained image is an encoded image, it may be decoded into a raw image data stream, for example image data in RGB (Red Green Blue) format, and the decoded image is then processed.
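As a minimal sketch only, the decoding step might look like the following; the helper name load_rgb and the use of OpenCV and NumPy are assumptions for illustration, not part of the patent.

```python
import cv2
import numpy as np

def load_rgb(path: str) -> np.ndarray:
    """Decode an encoded image file (JPEG/PNG/BMP/...) into an RGB pixel array."""
    bgr = cv2.imread(path, cv2.IMREAD_COLOR)        # OpenCV decodes to BGR order
    if bgr is None:
        raise ValueError("could not decode image: " + path)
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)     # reorder channels to RGB
```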
Step 204: detect the position of the face in the image, and mark the parameters of the face.
Specifically, face detection can automatically output the coordinates of a face box, and the face region is determined from those coordinates. The parameters of the face include the center point coordinates of the face, the face width and the face height. The center point coordinates of the face are marked as (fx, fy), and the face width and height are marked as (wf, hf) respectively.
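For illustration, the face parameters could be obtained with a stock detector as in the sketch below; the Haar-cascade detector is an assumption, since the patent does not prescribe a particular face-detection algorithm.

```python
import cv2

def detect_face_params(rgb):
    """Return ((fx, fy), (wf, hf)) for the first detected face, or None."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, wf, hf = faces[0]                      # face box: top-left corner and size
    fx, fy = x + wf / 2.0, y + hf / 2.0          # face center point (fx, fy)
    return (fx, fy), (wf, hf)
```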
Step 206: estimate the parameters of the to-be-processed body part region in the image according to the parameters of the face.
Specifically, step 206 includes: estimating the lower-line coordinate, upper-line coordinate, width and center point coordinates of the to-be-processed body part region from the center point coordinates, width and height of the face.
The to-be-processed body part may be the body, the waist, the face, a leg or the like. The to-be-processed body part region refers to an image range that contains the to-be-processed body part.
Step 208: determine the to-be-processed body part region according to the parameters of the to-be-processed body part region.
Specifically, the to-be-processed body part region in the image can be determined automatically from its lower-line coordinate, upper-line coordinate, width and center point coordinates.
Step 210: obtain the width of the to-be-processed body part in the to-be-processed body part region.
In this embodiment, vertical edge detection may be performed automatically on the to-be-processed body part region to obtain the edge lines on both sides of the to-be-processed body part, and the distance between the two edge lines gives the width of the to-be-processed body part.
Specifically, vertical edge detection is performed on the to-be-processed body part region, the longest edges on the two sides are extracted as the two edge lines of the region, and the distance between them is calculated to obtain the width of the to-be-processed body part.
The width of the to-be-processed body part refers to the actual width of that body part in the image.
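A rough sketch of this width measurement follows; the Sobel operator and the column-profile heuristic for picking the strongest edge on each side are illustrative assumptions, as the patent only states that vertical edge detection is used and that the distance between the two side edge lines is taken as the width.

```python
import cv2
import numpy as np

def part_width(region_rgb):
    """Estimate the body part's width inside a cropped region, in pixels."""
    gray = cv2.cvtColor(region_rgb, cv2.COLOR_RGB2GRAY)
    edges = np.abs(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3))  # x-gradient responds to vertical edges
    profile = edges.sum(axis=0)                          # cumulative edge strength per column
    mid = profile.size // 2
    left_edge = int(np.argmax(profile[:mid]))            # strongest edge column in the left half
    right_edge = mid + int(np.argmax(profile[mid:]))     # strongest edge column in the right half
    return right_edge - left_edge
```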
Step 212: scale the pixel value of each pixel within the width of the to-be-processed body part in the to-be-processed body part region, to obtain the scaled pixel value of each pixel.
Specifically, the scaling may be performed according to a scaling ratio, or according to a pre-established zoom-factor function.
In one embodiment, the step of scaling the pixel value of each pixel within the width of the to-be-processed body part in the to-be-processed body part region to obtain the scaled pixel value of each pixel includes:
obtaining, for each pixel within the width of the to-be-processed body part in the to-be-processed body part region, the horizontal distance from the pixel to the center point of the region and the zoom factor corresponding to the pixel;
if the horizontal distance of a pixel is less than a preset multiple of the head width and the pixel lies on a first side of the center point of the to-be-processed body part region, obtaining a first target pixel according to the center point of the region and the zoom factor of the pixel, selecting a predetermined number of pixels around the first target pixel, interpolating their pixel values to obtain an interpolated pixel value, and taking the interpolated pixel value as the new pixel value of the pixel;
if the horizontal distance of a pixel is less than the preset multiple of the head width and the pixel lies on a second side of the center point of the to-be-processed body part region, obtaining a second target pixel according to the center point of the region and the zoom factor of the pixel, selecting a predetermined number of pixels around the second target pixel, interpolating their pixel values to obtain an interpolated pixel value, and taking the interpolated pixel value as the new pixel value of the pixel.
The zoom factor may be calculated according to a smooth-transition formula, given as formulas (1), (2) and (3):
where r is the horizontal distance between the pixel and the center point of the to-be-processed body part region; d is the width of the head, i.e. wf; a and b are constants whose values can be chosen as needed, and in this embodiment a is 1 and b is 0.8. The smooth-transition formula was arrived at through repeated experiments and ensures that the change is small at the edges and large in the middle. θ(r) and λ(r) are intermediate variables, and f(r) is the zoom factor.
For each pixel (x, y) within the width of the to-be-processed body part in the to-be-processed body part region, let r be its horizontal distance to the center point (x0, y0) of the region. If r is less than 2d and the pixel lies on the first side (for example the left side) of the center point, the first target pixel is (x0, y0 - f(r)); if r is less than 2d and the pixel lies on the second side (for example the right side) of the center point, the second target pixel is (x0, y0 + f(r)). Here, left and right refer to the image as viewed, with the person facing the viewer.
The predetermined number can be chosen as needed, for example four, five or six pixels. If four pixels are chosen, the value obtained by applying bilinear interpolation to the pixel values of the four pixels is taken as the pixel value of the first target pixel or the second target pixel. Bilinear interpolation may interpolate linearly in the X direction first and then in the Y direction, or in the Y direction first and then in the X direction.
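A minimal sketch of bilinear interpolation from the four pixels surrounding a (possibly fractional) target location (tx, ty) is given below; it illustrates the standard technique named in the text and is not code taken from the patent.

```python
import numpy as np

def bilinear_sample(img: np.ndarray, tx: float, ty: float) -> np.ndarray:
    """Bilinearly interpolate the pixel value at fractional coordinates (tx, ty)."""
    h, w = img.shape[:2]
    x0 = min(max(int(np.floor(tx)), 0), w - 1)
    y0 = min(max(int(np.floor(ty)), 0), h - 1)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    ax, ay = tx - x0, ty - y0
    top = (1 - ax) * img[y0, x0] + ax * img[y0, x1]       # linear interpolation along X
    bottom = (1 - ax) * img[y1, x0] + ax * img[y1, x1]
    return (1 - ay) * top + ay * bottom                   # then along Y
```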
When a pixel lies on the first side of the center point of the to-be-processed body part region, the step of obtaining the first target pixel according to the center point of the region and the zoom factor of the pixel includes:
obtaining the abscissa and ordinate of the center point of the to-be-processed body part region, taking the abscissa of the center point as the abscissa of the first target pixel, taking the ordinate of the center point minus the zoom factor of the pixel as the ordinate of the first target pixel, and determining the first target pixel from its abscissa and ordinate.
When a pixel lies on the second side of the center point of the to-be-processed body part region, the step of obtaining the second target pixel according to the center point of the region and the zoom factor of the pixel includes:
obtaining the abscissa and ordinate of the center point of the to-be-processed body part region, taking the abscissa of the center point as the abscissa of the second target pixel, taking the ordinate of the center point plus the zoom factor of the pixel as the ordinate of the second target pixel, and determining the second target pixel from its abscissa and ordinate.
With the above person image processing method, after an image is obtained, the position of the face in the image is detected and the parameters of the face are marked; the parameters of the to-be-processed body part region are estimated from the face parameters; the region is determined from those parameters; the width of the to-be-processed body part in the region is obtained; and the pixels within that width are scaled to obtain the processed image. Pixels of the to-be-processed body part in a person image are thus processed automatically, without the expense of learning professional photo-retouching software, so the person in the picture is processed automatically and efficiently.
Fig. 3 is a flowchart of a person image processing method in another embodiment. The method of Fig. 3 differs from that of Fig. 2 in that it additionally judges whether the width of the to-be-processed body part exceeds a threshold, adjusting only if it does, and performs histogram equalization on the image after scaling. As shown in Fig. 3, a person image processing method comprises the following steps:
Step 302: obtain an image.
Specifically, the image may be captured by an acquisition device such as a camera. The captured image may be encoded; the encoding format may be BMP, JPEG, GIF, TGA, EXIF, PNG and so on.
If the obtained image is an encoded image, it may be decoded into a raw image data stream, for example image data in RGB format, and the decoded image is then processed.
Step 304: detect the position of the face in the image, and mark the parameters of the face.
Specifically, face detection can automatically output the coordinates of a face box, and the face region is determined from those coordinates. The parameters of the face include the center point coordinates of the face, the face width and the face height. The center point coordinates of the face are marked as (fx, fy), and the face width and height as (wf, hf).
Step 306: estimate the parameters of the to-be-processed body part region in the image according to the parameters of the face.
Specifically, step 306 includes: estimating the lower-line coordinate, upper-line coordinate, width and center point coordinates of the to-be-processed body part region from the center point coordinates, width and height of the face.
The to-be-processed body part may be the body, the waist, the face, a leg or the like. The to-be-processed body part region refers to an image range that contains the to-be-processed body part.
Step 308: determine the to-be-processed body part region according to the parameters of the to-be-processed body part region.
Specifically, the to-be-processed body part region in the image can be determined automatically from its lower-line coordinate, upper-line coordinate, width and center point coordinates.
Step 310: obtain the width of the to-be-processed body part in the to-be-processed body part region.
In this embodiment, vertical edge detection may be performed automatically on the to-be-processed body part region to obtain the edge lines on both sides of the to-be-processed body part, and the distance between the two edge lines gives the width of the to-be-processed body part.
Specifically, vertical edge detection is performed on the to-be-processed body part region, the longest edges on the two sides are extracted as the two edge lines of the region, and the distance between them is calculated to obtain the width of the to-be-processed body part.
The width of the to-be-processed body part refers to the actual width of that body part in the image.
Step 312: judge whether the width of the to-be-processed body part in the to-be-processed body part region is greater than a threshold; if so, perform step 314; if not, perform step 316.
Step 314: scale the pixel value of each pixel within the width of the to-be-processed body part in the to-be-processed body part region, to obtain the scaled pixel value of each pixel.
Specifically, the scaling may be performed according to a scaling ratio, or according to a pre-established zoom-factor function.
For each pixel within the width of the to-be-processed body part in the to-be-processed body part region, the horizontal distance from the pixel to the center point of the region and the zoom factor corresponding to the pixel are obtained;
if the horizontal distance of a pixel is less than a preset multiple of the head width and the pixel lies on a first side of the center point of the to-be-processed body part region, a first target pixel is obtained according to the center point of the region and the zoom factor of the pixel, a predetermined number of pixels around the first target pixel are selected, their pixel values are interpolated to obtain an interpolated pixel value, and the interpolated pixel value is taken as the new pixel value of the pixel;
if the horizontal distance of a pixel is less than the preset multiple of the head width and the pixel lies on a second side of the center point of the to-be-processed body part region, a second target pixel is obtained according to the center point of the region and the zoom factor of the pixel, a predetermined number of pixels around the second target pixel are selected, their pixel values are interpolated to obtain an interpolated pixel value, and the interpolated pixel value is taken as the new pixel value of the pixel.
Step 316: no processing is performed.
Step 318: perform histogram equalization on the scaled image.
Specifically, histogram equalization improves the contrast of the image and reduces the color unevenness introduced by the transformation. Histogram equalization transforms the gray-level histogram of the original image from a relatively concentrated gray-scale interval into a uniform distribution over the whole gray-scale range.
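As a small illustrative sketch, histogram equalization of a color image could be done as below; equalizing only the luma (Y) channel of YCrCb is a common choice and an assumption here, since the patent does not specify the color space used.

```python
import cv2

def equalize(rgb):
    """Histogram-equalize the luma channel of an RGB image."""
    ycrcb = cv2.cvtColor(rgb, cv2.COLOR_RGB2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    y = cv2.equalizeHist(y)                               # spread the luma histogram
    return cv2.cvtColor(cv2.merge((y, cr, cb)), cv2.COLOR_YCrCb2RGB)
```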
In addition, after step 318 the method may further include displaying the image after histogram equalization. Specifically, the image after histogram equalization is displayed on the interface of the terminal.
With the above person image processing method, after an image is obtained, the position of the face in the image is detected and the parameters of the face are marked; the parameters of the to-be-processed body part region are estimated from the face parameters; the region is determined from those parameters; the width of the to-be-processed body part in the region is obtained; and the pixels within that width are scaled to obtain the processed image. Pixels of the to-be-processed body part in a person image are thus processed automatically, without the expense of learning professional photo-retouching software, and the processing is efficient. Judging the width first reduces the amount of data to be processed, and histogram equalization improves the contrast of the image and reduces the color unevenness introduced by the transformation.
In other embodiments, step 318 may also be omitted.
The application of the person image processing method of the present invention to processing the body region of a person is described in detail below. The specific process includes:
(a1) obtain an image and decode it into RGB format;
(a2) detect the position of the face in the image and mark the parameters of the face; the face parameters include the center point coordinates of the face (fx, fy) and the face width and height (wf, hf) respectively.
Fig. 4 is a schematic diagram of face marking. As shown in Fig. 4, the center point coordinates of the face are (fx, fy), and the face width and height are wf and hf respectively.
(a3) estimate the parameters of the human body region from the parameters of the face; the body region parameters include the lower-line coordinate, upper-line coordinate, width and center point coordinates of the body region.
Specifically, according to the figure-drawing rules discovered and proposed by Leonardo da Vinci, the proportions of a standard human body are: the head is 1/8 of the height, the shoulder width is 1/4 of the height, and, with the navel as the dividing point, the ratio of the upper body to the lower body is 5:8, conforming to the golden ratio. For a front-facing figure, the body region parameters are calculated as follows:
total body height: high = 8*hf;
lower line of the body region: hby = fy - 7.5*hf;
upper line of the body region: hty = hby + 8hf = fy + 0.5hf;
width of the body region, taken as 3 times the head width: wc = 3*wf;
center point coordinates of the body region: (fx, fy - 3.5hf).
Fig. 5 is a schematic diagram of the positions of the parameters of the human body region. As shown in Fig. 5, the lower line of the body region is the body starting-point coordinate hby, the upper line of the body region is hty, and the width of the body region is wc.
(a4) determine the human body region according to the parameters of the body region.
Specifically, the body region is the region between its lower line and upper line, centered on the body-region center coordinates (fx, fy - 3.5hf), with width wc.
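A sketch of steps (a3) and (a4) follows, deriving the body-region parameters from the face parameters and cropping that region; the variable names mirror the text, while the row/column orientation of the crop is an assumption for illustration.

```python
def body_region_params(fx, fy, wf, hf):
    """Body-region parameters per step (a3)."""
    high = 8 * hf                  # total body height
    hby = fy - 7.5 * hf            # lower line of the body region
    hty = hby + 8 * hf             # upper line: fy + 0.5*hf
    wc = 3 * wf                    # body-region width, 3 head widths
    center = (fx, fy - 3.5 * hf)   # body-region center point
    return high, hby, hty, wc, center

def crop_body_region(img, fx, fy, wf, hf):
    """Crop the body region per step (a4); img is a NumPy array indexed [row, col]."""
    _, hby, hty, wc, (cx, cy) = body_region_params(fx, fy, wf, hf)
    top, bottom = int(min(hby, hty)), int(max(hby, hty))
    left, right = int(cx - wc / 2), int(cx + wc / 2)
    return img[max(top, 0):bottom, max(left, 0):right]
```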
(a5) detect the width w1 of the body.
Specifically, vertical edge detection is performed within the body region to obtain the edge lines on both sides of the body, and the distance between the two edge lines is the width w1 of the body.
(a6) judge whether the width of the body is greater than a body threshold; if so, adjustment, i.e. transformation, is needed; if not, no adjustment is needed.
Specifically, the body threshold may be 2 times the head width, 2*wf.
(a7) obtain the zoom factor.
The zoom factor may be calculated according to the smooth-transition formula, given as formulas (1), (2) and (3):
where r is the horizontal distance between the pixel and the center point of the body region; d is the width of the head, i.e. wf; a and b are constants whose values can be chosen as needed, and in this embodiment a is 1 and b is 0.8. The smooth-transition formula was arrived at through repeated experiments and ensures that the change is small at the edges and large in the middle. θ(r) and λ(r) are intermediate variables, and f(r) is the zoom factor.
(a8) transform the pixels within the width of the body.
For each pixel (x, y) within the width of the body in the body region, let r be its horizontal distance to the center point (fx, fy - 3.5hf) of the body region. If r is less than 2d and the pixel lies on the first side (for example the left side) of the center point of the body region, the first target pixel is (fx, fy - 3.5hf - f(r)), and the value obtained by bilinear interpolation of the four pixels around the first target pixel (fx, fy - 3.5hf - f(r)) is taken as the pixel value of the pixel; if r is less than 2d and the pixel lies on the second side (for example the right side) of the center point of the body region, the second target pixel is (fx, fy - 3.5hf + f(r)), and the value obtained by bilinear interpolation of the four pixels around the second target pixel (fx, fy - 3.5hf + f(r)) is taken as the pixel value of the pixel.
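The sketch below illustrates the per-pixel transformation of step (a8). The smooth-transition formulas (1) to (3) are not reproduced in this text, so the zoom factor is passed in as a callable f(r); applying the f(r) offset along the width axis at the pixel's own row, and reusing bilinear_sample() from the earlier sketch, are simplifying assumptions for illustration, since the patent's notation writes the target as the region center offset by f(r).

```python
def slim_region(img, cx, d, f, top, bottom, left, right):
    """Warp the pixels inside [top:bottom, left:right] toward the center column cx."""
    h, w = img.shape[:2]
    out = img.copy()
    y0, y1 = max(int(top), 0), min(int(bottom), h)
    x0, x1 = max(int(left), 0), min(int(right), w)
    for y in range(y0, y1):
        for x in range(x0, x1):
            r = abs(x - cx)                           # horizontal distance to the region center
            if r >= 2 * d:                            # beyond the preset multiple (2x head width)
                continue
            tx = cx - f(r) if x < cx else cx + f(r)   # first side / second side target
            out[y, x] = bilinear_sample(img, tx, float(y))
    return out
```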
(a9) perform histogram equalization on the transformed image.
Specifically, histogram equalization improves the contrast of the image and reduces the color unevenness introduced by the transformation. Histogram equalization transforms the gray-level histogram of the original image from a relatively concentrated gray-scale interval into a uniform distribution over the whole gray-scale range.
The application of the person image processing method of the present invention to processing the waist region of a person is described in detail below. The specific process includes:
(b1) obtain an image and decode it into RGB format;
(b2) detect the position of the face in the image and mark the parameters of the face; the face parameters include the center point coordinates of the face (fx, fy) and the face width and height (wf, hf) respectively.
Referring again to Fig. 4, the center point coordinates of the face are (fx, fy), and the face width and height are wf and hf respectively.
(b3) estimate the parameters of the human waist region from the parameters of the face; the waist region parameters include the lower-line coordinate, upper-line coordinate, width and center point coordinates of the waist region.
Specifically, according to the figure-drawing rules discovered and proposed by Leonardo da Vinci, the proportions of a standard human body are: the head is 1/8 of the height, the shoulder width is 1/4 of the height, and, with the navel as the dividing point, the ratio of the upper body to the lower body is 5:8, conforming to the golden ratio. For a front-facing figure, the waist region parameters are calculated as follows:
total body height: high = 8*hf;
lower line of the waist region: hby = fy - 4.5*hf;
upper line of the waist region: hty = hby + 2hf = fy - 2.5hf;
width of the waist region, taken as 2 times the head width: we = 2*wf;
center point coordinates of the waist region: (fx, fy - 3.5hf).
Fig. 6 is a schematic diagram of the positions of the parameters of the human waist region. As shown in Fig. 6, the lower line of the waist region is hby, the upper line of the waist region is hty, and the width of the waist region is we.
(b4) determine the human waist region according to the parameters of the waist region.
Specifically, the waist region is the region between its lower line and upper line, centered on the waist-region center coordinates (fx, fy - 3.5hf), with width we.
(b5) detect the width w2 of the waist.
Specifically, vertical edge detection is performed within the waist region to obtain the edge lines on both sides of the waist, and the distance between the two edge lines is the width w2 of the waist.
(b6) judge whether the width of the waist is greater than a waist threshold; if so, adjustment, i.e. transformation, is needed; if not, no adjustment is needed.
Specifically, the threshold may be 2 times the head width, 2*wf.
(b7) obtain the zoom factor.
The zoom factor may be calculated according to the smooth-transition formula, given as formulas (1), (2) and (3):
where r is the horizontal distance between the pixel and the center point of the waist region; d is the width of the head, i.e. wf; a and b are constants whose values can be chosen as needed, and in this embodiment a is 1 and b is 0.8. The smooth-transition formula was arrived at through repeated experiments and ensures that the change is small at the edges and large in the middle. θ(r) and λ(r) are intermediate variables, and f(r) is the zoom factor.
(b8) transform the pixels within the width of the waist.
For each pixel (x, y) within the width of the waist in the waist region, let r be its horizontal distance to the center point (fx, fy - 3.5hf) of the waist region. If r is less than 2d and the pixel lies on the first side (for example the left side) of the center point of the waist region, the first target pixel is (fx, fy - 3.5hf - f(r)), and the value obtained by bilinear interpolation of the four pixels around the first target pixel is taken as the pixel value of the pixel; if r is less than 2d and the pixel lies on the second side (for example the right side) of the center point of the waist region, the second target pixel is (fx, fy - 3.5hf + f(r)), and the value obtained by bilinear interpolation of the four pixels around the second target pixel is taken as the pixel value of the pixel.
(b9) perform histogram equalization on the transformed image.
Specifically, histogram equalization improves the contrast of the image and reduces the color unevenness introduced by the transformation. Histogram equalization transforms the gray-level histogram of the original image from a relatively concentrated gray-scale interval into a uniform distribution over the whole gray-scale range.
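By way of a usage example only, the steps above could be chained as in the sketch below; all of the helper names (load_rgb, detect_face_params, body_region_params, part_width, slim_region, equalize) come from the earlier sketches, and the whole pipeline is an illustrative assumption rather than the patent's implementation. The zoom-factor function f must be supplied, since formulas (1) to (3) are not reproduced here.

```python
def process_person_image(path, f, threshold_multiple=2.0):
    """Slim the body region of the person in the image at `path` and equalize the result."""
    img = load_rgb(path)                                     # (a1) obtain and decode
    face = detect_face_params(img)                           # (a2) face parameters
    if face is None:
        return img                                           # no face found: nothing to do
    (fx, fy), (wf, hf) = face
    _, hby, hty, wc, (cx, cy) = body_region_params(fx, fy, wf, hf)   # (a3)
    top, bottom = sorted((hby, hty))                         # (a4) region bounds
    left, right = cx - wc / 2, cx + wc / 2
    region = img[max(int(top), 0):int(bottom), max(int(left), 0):int(right)]
    if region.size == 0 or part_width(region) <= threshold_multiple * wf:
        return img                                           # (a5)-(a6) width not excessive: no adjustment
    out = slim_region(img, cx, wf, f, top, bottom, left, right)      # (a7)-(a8)
    return equalize(out)                                     # (a9) histogram equalization
```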
Fig. 7 is a structural block diagram of a person image processing apparatus in one embodiment. The person image processing apparatus in Fig. 7 corresponds to a virtual functional architecture built from the person image processing method of Fig. 2; for details not described here, refer to the description of the method, and the functional modules in Fig. 7 are not limited to the division described in this embodiment. As shown in Fig. 7, a person image processing apparatus includes an image obtaining module 710, a detection and marking module 720, an estimation module 730, a region determination module 740, a width detection module 750 and a scaling module 760. Among them:
The image obtaining module 710 is configured to obtain an image.
Specifically, if the obtained image is an encoded image, it may be decoded into a raw image data stream, for example image data in RGB (Red Green Blue) format, and the decoded image is then processed.
The detection and marking module 720 is configured to detect the position of the face in the image and mark the parameters of the face.
Specifically, face detection can automatically output the coordinates of a face box, and the face region is determined from those coordinates. The parameters of the face include the center point coordinates of the face, the face width and the face height. The center point coordinates of the face are marked as (fx, fy), and the face width and height are marked as (wf, hf) respectively.
The detection and marking module 720 is also configured to mark the center point coordinates of the face, the face width and the face height.
The estimation module 730 is configured to estimate the parameters of the to-be-processed body part region in the image according to the parameters of the face.
Specifically, the estimation module 730 estimates the lower-line coordinate, upper-line coordinate, width and center point coordinates of the to-be-processed body part region from the center point coordinates, width and height of the face. The to-be-processed body part may be the body, the waist, the face, a leg or the like. The to-be-processed body part region refers to an image range that contains the to-be-processed body part.
The region determination module 740 is configured to determine the to-be-processed body part region according to the parameters of the to-be-processed body part region. Specifically, the region in the image can be determined automatically from its lower-line coordinate, upper-line coordinate, width and center point coordinates.
The width detection module 750 is configured to obtain the width of the to-be-processed body part in the to-be-processed body part region.
In this embodiment, vertical edge detection may be performed automatically on the to-be-processed body part region to obtain the edge lines on both sides of the to-be-processed body part, and the distance between the two edge lines gives the width of the to-be-processed body part.
Specifically, vertical edge detection is performed on the to-be-processed body part region, the longest edges on the two sides are extracted as the two edge lines of the region, and the distance between them is calculated to obtain the width of the to-be-processed body part.
The width of the to-be-processed body part refers to the actual width of that body part in the image.
The scaling module 760 is configured to scale the pixel value of each pixel within the width of the to-be-processed body part in the to-be-processed body part region, to obtain the scaled pixel value of each pixel.
Specifically, the scaling may be performed according to a scaling ratio, or according to a pre-established zoom-factor function.
In one embodiment, the step of scaling the pixel value of each pixel within the width of the to-be-processed body part in the to-be-processed body part region to obtain the scaled pixel value of each pixel includes:
obtaining, for each pixel within the width of the to-be-processed body part in the to-be-processed body part region, the horizontal distance from the pixel to the center point of the region and the zoom factor corresponding to the pixel;
if the horizontal distance of a pixel is less than a preset multiple of the head width and the pixel lies on a first side of the center point of the to-be-processed body part region, obtaining a first target pixel according to the center point of the region and the zoom factor of the pixel, selecting a predetermined number of pixels around the first target pixel, interpolating their pixel values to obtain an interpolated pixel value, and taking the interpolated pixel value as the new pixel value of the pixel;
if the horizontal distance of a pixel is less than the preset multiple of the head width and the pixel lies on a second side of the center point of the to-be-processed body part region, obtaining a second target pixel according to the center point of the region and the zoom factor of the pixel, selecting a predetermined number of pixels around the second target pixel, interpolating their pixel values to obtain an interpolated pixel value, and taking the interpolated pixel value as the new pixel value of the pixel.
The zoom factor may be calculated according to a smooth-transition formula, given as formulas (1), (2) and (3):
where r is the horizontal distance between the pixel and the center point of the to-be-processed body part region; d is the width of the head, i.e. wf; a and b are constants whose values can be chosen as needed, and in this embodiment a is 1 and b is 0.8. The smooth-transition formula was arrived at through repeated experiments and ensures that the change is small at the edges and large in the middle. θ(r) and λ(r) are intermediate variables, and f(r) is the zoom factor.
For each pixel (x, y) within the width of the to-be-processed body part in the to-be-processed body part region, let r be its horizontal distance to the center point (x0, y0) of the region. If r is less than 2d and the pixel lies on the first side (for example the left side) of the center point, the first target pixel is (x0, y0 - f(r)); if r is less than 2d and the pixel lies on the second side (for example the right side) of the center point, the second target pixel is (x0, y0 + f(r)). Here, left and right refer to the image as viewed, with the person facing the viewer.
The predetermined number can be chosen as needed, for example four, five or six pixels. If four pixels are chosen, the value obtained by applying bilinear interpolation to the pixel values of the four pixels is taken as the pixel value of the first target pixel or the second target pixel. Bilinear interpolation may interpolate linearly in the X direction first and then in the Y direction, or in the Y direction first and then in the X direction.
The scaling module 760 is also configured to obtain the abscissa and ordinate of the center point of the to-be-processed body part region, take the abscissa of the center point as the abscissa of the first target pixel, take the ordinate of the center point minus the zoom factor of the pixel as the ordinate of the first target pixel, and determine the first target pixel from its abscissa and ordinate; and to obtain the abscissa and ordinate of the center point of the region, take the abscissa of the center point as the abscissa of the second target pixel, take the ordinate of the center point plus the zoom factor of the pixel as the ordinate of the second target pixel, and determine the second target pixel from its abscissa and ordinate.
With the above person image processing apparatus, after an image is obtained, the position of the face in the image is detected and the parameters of the face are marked; the parameters of the to-be-processed body part region are estimated from the face parameters; the region is determined from those parameters; the width of the to-be-processed body part in the region is obtained; and the pixels within that width are scaled to obtain the processed image. Pixels of the to-be-processed body part in a person image are thus processed automatically, without the expense of learning professional photo-retouching software, so the person in the picture is processed automatically and efficiently.
Fig. 8 is a structural block diagram of a person image processing apparatus in another embodiment. As shown in Fig. 8, in addition to the image obtaining module 710, the detection and marking module 720, the estimation module 730, the region determination module 740, the width detection module 750 and the scaling module 760, the above person image processing apparatus further includes a judgment module 752 and an equalization processing module 770. Among them:
The judgment module 752 is configured to judge whether the width of the to-be-processed body part in the to-be-processed body part region is greater than a threshold; if so, the scaling module scales the pixel value of each pixel within the width of the to-be-processed body part in the to-be-processed body part region; if not, no processing is performed.
The equalization processing module 770 is configured to perform histogram equalization on the scaled image.
Specifically, histogram equalization improves the contrast of the image and reduces the color unevenness introduced by the transformation. Histogram equalization transforms the gray-level histogram of the original image from a relatively concentrated gray-scale interval into a uniform distribution over the whole gray-scale range.
In other embodiments, a person image processing apparatus may include any possible combination of the image obtaining module 710, the detection and marking module 720, the estimation module 730, the region determination module 740, the width detection module 750, the scaling module 760, the judgment module 752 and the equalization processing module 770.
Those of ordinary skill in the art will understand that all or part of the processes of the methods in the above embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM) or the like.
The above embodiments express only several implementations of the present invention, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art can make several modifications and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (18)

1. A person image processing method, comprising the following steps:
obtaining an image;
detecting the position of a face in the image, and marking the parameters of the face;
estimating the parameters of a to-be-processed body part region in the image according to the parameters of the face;
determining the to-be-processed body part region according to the parameters of the to-be-processed body part region;
obtaining the width of the to-be-processed body part in the to-be-processed body part region;
scaling, automatically and according to a pre-established zoom-factor function, the pixel value of each pixel within the width of the to-be-processed body part in the to-be-processed body part region, to obtain the scaled pixel value of each pixel, comprising:
obtaining, for each pixel within the width of the to-be-processed body part in the to-be-processed body part region, the horizontal distance from the pixel to the center point of the to-be-processed body part region and the zoom factor corresponding to the pixel;
if the horizontal distance of a pixel is less than a preset multiple of the head width and the pixel lies on a first side of the center point of the to-be-processed body part region, obtaining the abscissa and ordinate of the center point of the region, taking the abscissa of the center point as the abscissa of a first target pixel, taking the ordinate of the center point minus the zoom factor of the pixel as the ordinate of the first target pixel, determining the first target pixel from its abscissa and ordinate, and obtaining the new pixel value of the pixel according to the first target pixel;
if the horizontal distance of a pixel is less than the preset multiple of the head width and the pixel lies on a second side of the center point of the to-be-processed body part region, obtaining the abscissa and ordinate of the center point of the region, taking the abscissa of the center point as the abscissa of a second target pixel, taking the ordinate of the center point plus the zoom factor of the pixel as the ordinate of the second target pixel, determining the second target pixel from its abscissa and ordinate, and obtaining the new pixel value of the pixel according to the second target pixel.
2. The method according to claim 1, wherein the step of marking the parameters of the face comprises:
marking the center point coordinates of the face, the width of the face and the height of the face;
and the step of estimating the parameters of the to-be-processed body part region in the image according to the parameters of the face comprises:
estimating the lower-line coordinate, upper-line coordinate, width and center point coordinates of the to-be-processed body part region according to the center point coordinates, width and height of the face.
3. The method according to claim 1, wherein the step of obtaining the width of the to-be-processed body part in the to-be-processed body part region comprises:
performing vertical edge detection on the to-be-processed body part region to obtain the edge lines on both sides of the to-be-processed body part, and taking the distance between the two edge lines as the width of the to-be-processed body part.
4. The method according to claim 1, wherein the step of scaling the pixel value of each pixel within the width of the to-be-processed body part in the to-be-processed body part region to obtain the scaled pixel value of each pixel further comprises:
if the horizontal distance of a pixel is less than the preset multiple of the head width and the pixel lies on the first side of the center point of the to-be-processed body part region, obtaining the first target pixel according to the center point of the region and the zoom factor of the pixel, selecting a predetermined number of pixels around the first target pixel, interpolating their pixel values to obtain an interpolated pixel value, and taking the interpolated pixel value as the new pixel value of the pixel;
if the horizontal distance of a pixel is less than the preset multiple of the head width and the pixel lies on the second side of the center point of the to-be-processed body part region, obtaining the second target pixel according to the center point of the region and the zoom factor of the pixel, selecting a predetermined number of pixels around the second target pixel, interpolating their pixel values to obtain an interpolated pixel value, and taking the interpolated pixel value as the new pixel value of the pixel.
5. The method according to claim 1, wherein the zoom factor is calculated according to a smooth-transition formula:
where r is the horizontal distance between a pixel and the center point of the to-be-processed body part region, d is the width of the head, a and b are constants, θ(r) and λ(r) are intermediate variables, and f(r) is the zoom factor.
6. The method according to claim 1, wherein after the step of obtaining the width of the to-be-processed body part in the to-be-processed body part region, the method further comprises:
judging whether the width of the to-be-processed body part in the to-be-processed body part region is greater than a threshold; if so, scaling the pixel value of each pixel within the width of the to-be-processed body part in the to-be-processed body part region; if not, performing no processing.
7. The method according to any one of claims 1 to 6, further comprising:
performing histogram equalization on the image after scaling.
8. The method according to any one of claims 1 to 6, wherein the to-be-processed body part is the body or the waist.
9. A person image processing apparatus, comprising:
an image obtaining module, configured to obtain an image;
a detection and marking module, configured to detect the position of a face in the image and mark the parameters of the face;
an estimation module, configured to estimate the parameters of a to-be-processed body part region in the image according to the parameters of the face;
a region determination module, configured to determine the to-be-processed body part region according to the parameters of the to-be-processed body part region;
a width detection module, configured to obtain the width of the to-be-processed body part in the to-be-processed body part region;
a scaling module, configured to scale, automatically and according to a pre-established zoom-factor function, the pixel value of each pixel within the width of the to-be-processed body part in the to-be-processed body part region, to obtain the scaled pixel value of each pixel;
wherein the scaling module is further configured to obtain, for each pixel within the width of the to-be-processed body part in the to-be-processed body part region, the horizontal distance from the pixel to the center point of the region and the zoom factor corresponding to the pixel;
if the horizontal distance of a pixel is less than a preset multiple of the head width and the pixel lies on a first side of the center point of the to-be-processed body part region, obtain the abscissa and ordinate of the center point of the region, take the abscissa of the center point as the abscissa of a first target pixel, take the ordinate of the center point minus the zoom factor of the pixel as the ordinate of the first target pixel, determine the first target pixel from its abscissa and ordinate, and obtain the new pixel value of the pixel according to the first target pixel;
if the horizontal distance of a pixel is less than the preset multiple of the head width and the pixel lies on a second side of the center point of the to-be-processed body part region, obtain the abscissa and ordinate of the center point of the region, take the abscissa of the center point as the abscissa of a second target pixel, take the ordinate of the center point plus the zoom factor of the pixel as the ordinate of the second target pixel, determine the second target pixel from its abscissa and ordinate, and obtain the new pixel value of the pixel according to the second target pixel.
10. The apparatus according to claim 9, wherein the detection and marking module is further configured to mark the center point coordinates, width, and height of the face;
and the estimation module is further configured to estimate the lower line coordinate, upper line coordinate, width, and center point coordinates of the human body part region to be processed according to the center point coordinates, width, and height of the face.
11. The apparatus according to claim 9, wherein the width detection module is further configured to perform vertical edge detection on the human body part region to be processed, obtain the edge lines on both sides of the human body part to be processed, and take the distance between the two edge lines as the width of the human body part to be processed.
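A minimal sketch of the width measurement in claim 11, using a Sobel derivative in x so that vertical edges respond strongly; picking the strongest edge column in each half of the crop is a simplifying assumption, and the function name is illustrative.

```python
import cv2
import numpy as np

def measure_part_width(region_gray):
    """Estimate the body-part width from its left and right vertical edges (claim 11 sketch)."""
    grad_x = cv2.Sobel(region_gray, cv2.CV_32F, 1, 0, ksize=3)    # x-derivative highlights vertical edges
    column_strength = np.abs(grad_x).sum(axis=0)                   # edge energy per image column
    mid = column_strength.size // 2
    left_edge = int(np.argmax(column_strength[:mid]))              # strongest edge left of center
    right_edge = mid + int(np.argmax(column_strength[mid:]))       # strongest edge right of center
    return right_edge - left_edge                                  # distance between the two edge lines
```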
12. The apparatus according to claim 9, wherein the zoom module is further configured to:
if the horizontal distance of a pixel is less than a preset multiple of the head width and the pixel is located on the first side of the center point of the human body part region to be processed, obtain a first target pixel according to the center point of the human body part region to be processed and the zoom factor corresponding to the pixel, select a predetermined number of pixels around the first target pixel, interpolate the pixel values of the predetermined number of pixels to obtain an interpolated pixel value, and take the interpolated pixel value as the new pixel value of the pixel;
if the horizontal distance of a pixel is less than the preset multiple of the head width and the pixel is located on the second side of the center point of the human body part region to be processed, obtain a second target pixel according to the center point of the human body part region to be processed and the zoom factor corresponding to the pixel, select a predetermined number of pixels around the second target pixel, interpolate the pixel values of the predetermined number of pixels to obtain an interpolated pixel value, and take the interpolated pixel value as the new pixel value of the pixel.
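Claim 12 reads the new pixel value by interpolating a predetermined number of pixels around the target pixel. The sketch below uses the four nearest neighbours (bilinear interpolation); the neighbour count and the interpolation scheme are assumptions, since the claim does not fix them.

```python
import numpy as np

def sample_bilinear(image, x, y):
    """Interpolate the four pixels surrounding a fractional target location (claim 12 sketch)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, image.shape[1] - 1)
    y1 = min(y0 + 1, image.shape[0] - 1)
    fx, fy = x - x0, y - y0                       # fractional offsets inside the pixel cell
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x1]
    bottom = (1 - fx) * image[y1, x0] + fx * image[y1, x1]
    return (1 - fy) * top + fy * bottom           # blend the two rows vertically
```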
13. The apparatus according to claim 9, wherein the zoom factor is calculated according to a smooth transformation formula, in which r is the horizontal distance between the pixel and the center point of the human body part region to be processed, d is the width of the head, a and b are constants, θ(r) and λ(r) are intermediate variables, and f(r) is the zoom factor.
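Claim 13's smooth transformation formula is not reproduced in this text, so the sketch below only illustrates the kind of function it describes: a zoom factor f(r) that varies smoothly with the horizontal distance r, scaled by the head width d and shaped by constants a and b through intermediate variables. The cosine falloff used here is an assumption, not the patent's formula.

```python
import numpy as np

def zoom_factor(r, d, a=0.1, b=1.5):
    """Illustrative smooth zoom factor f(r); NOT the patent's actual formula."""
    limit = b * d                                  # influence radius as a multiple of head width d
    theta = np.clip(r / limit, 0.0, 1.0)           # intermediate variable: 0 at the center, 1 at the limit
    lam = 0.5 * (1.0 + np.cos(np.pi * theta))      # intermediate variable: smooth falloff from 1 to 0
    return a * d * lam                             # largest displacement (a*d) at the center line
```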
14. The apparatus according to claim 9, further comprising:
a judgment module, configured to judge whether the width of the human body part to be processed in the human body part region to be processed is greater than a threshold; if so, the zoom module scales the pixel value of each pixel within the width of the human body part to be processed in the human body part region to be processed; if not, no processing is performed.
15. The apparatus according to any one of claims 9 to 14, further comprising:
an equalization processing module, configured to perform histogram equalization on the scaled image.
16. The apparatus according to any one of claims 9 to 14, wherein the human body part to be processed is the body or the waist.
17. A storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the character image processing method according to any one of claims 1 to 8.
18. A terminal device, comprising a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, wherein the processor, when executing the program, implements the character image processing method according to any one of claims 1 to 8.
CN201510613580.6A 2015-09-23 2015-09-23 Character image treating method and apparatus Active CN106558040B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510613580.6A CN106558040B (en) 2015-09-23 2015-09-23 Character image treating method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510613580.6A CN106558040B (en) 2015-09-23 2015-09-23 Character image treating method and apparatus

Publications (2)

Publication Number Publication Date
CN106558040A CN106558040A (en) 2017-04-05
CN106558040B true CN106558040B (en) 2019-07-19

Family

ID=58413378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510613580.6A Active CN106558040B (en) 2015-09-23 2015-09-23 Character image treating method and apparatus

Country Status (1)

Country Link
CN (1) CN106558040B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107395958B (en) * 2017-06-30 2019-11-15 北京金山安全软件有限公司 Image processing method and device, electronic equipment and storage medium
CN108090908B (en) * 2017-12-07 2020-02-04 深圳云天励飞技术有限公司 Image segmentation method, device, terminal and storage medium
CN110415168B (en) * 2018-04-27 2022-12-02 武汉斗鱼网络科技有限公司 Face local scaling processing method, storage medium, electronic device and system
US11410268B2 (en) 2018-05-31 2022-08-09 Beijing Sensetime Technology Development Co., Ltd Image processing methods and apparatuses, electronic devices, and storage media
CN110555794B (en) * 2018-05-31 2021-07-23 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN110555806B (en) * 2018-05-31 2022-09-27 北京市商汤科技开发有限公司 Image processing method and device, electronic device and storage medium
CN109034032B (en) * 2018-07-17 2022-01-11 北京世纪好未来教育科技有限公司 Image processing method, apparatus, device and medium
CN109214317B (en) * 2018-08-22 2021-11-12 北京慕华信息科技有限公司 Information quantity determination method and device
CN109472753B (en) * 2018-10-30 2021-09-07 北京市商汤科技开发有限公司 Image processing method and device, computer equipment and computer storage medium
CN111626920A (en) * 2020-05-09 2020-09-04 北京字节跳动网络技术有限公司 Picture processing method and device and electronic equipment
CN112907569B (en) * 2021-03-24 2024-03-15 贝壳找房(北京)科技有限公司 Head image region segmentation method, device, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6178259B1 (en) * 1998-04-17 2001-01-23 Unisys Corporation Method of compensating for pixel histogram distortion in an object recognition system
CN101184143A (en) * 2006-11-09 2008-05-21 松下电器产业株式会社 Image processor and image processing method
CN101478629A (en) * 2008-01-04 2009-07-08 华晶科技股份有限公司 Image processing process for regulating human face ratio
CN101221657A (en) * 2008-01-24 2008-07-16 杭州华三通信技术有限公司 Image zoom processing method and device
CN103565441A (en) * 2012-07-19 2014-02-12 株式会社百利达 Biometric apparatus and computer-readable storage medium storing body image creating program

Also Published As

Publication number Publication date
CN106558040A (en) 2017-04-05

Similar Documents

Publication Publication Date Title
CN106558040B (en) Character image treating method and apparatus
Ding et al. Importance filtering for image retargeting
US9344690B2 (en) Image demosaicing
Jia et al. RIHOOP: Robust invisible hyperlinks in offline and online photographs
US20150363922A1 (en) Super-resolution from handheld camera
CN105046708B (en) A kind of color correction objective evaluation method consistent with subjective perception
CN103745430B (en) Rapid beautifying method of digital image
CN105141838B (en) Demosaicing methods and the device for using this method
CN108419028A (en) Image processing method, device, computer readable storage medium and electronic equipment
CN107395958A (en) Image processing method and device, electronic equipment and storage medium
CN104657994B (en) A kind of method and system that image consistency is judged based on optical flow method
CN104350743B (en) For mixed image demosaicing and the system of distortion, method and computer program product
CN108961227A (en) A kind of image quality evaluating method based on airspace and transform domain multiple features fusion
Abdulrahman et al. Color image stegananalysis using correlations between RGB channels
Zhang et al. A light dual-task neural network for haze removal
CN109583341B (en) Method and device for detecting multi-person skeleton key points of image containing portrait
Rauhala et al. A novel interface to sensor networks using handheld augmented reality
CN103854020B (en) Character recognition method and device
CN110321452A (en) A kind of image search method based on direction selection mechanism
CN111275610A (en) Method and system for processing face aging image
CN106355559A (en) Image sequence denoising method and device
CN116912467A (en) Image stitching method, device, equipment and storage medium
CN113807251A (en) Sight estimation method based on appearance
CN107087114B (en) Shooting method and device
Zeng et al. Subpixel image quality assessment syncretizing local subpixel and global pixel features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230625

Address after: 518057 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 floors

Patentee after: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

Patentee after: TENCENT CLOUD COMPUTING (BEIJING) Co.,Ltd.

Address before: 2, 518000, East 403 room, SEG science and Technology Park, Zhenxing Road, Shenzhen, Guangdong, Futian District

Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.
