CN101969570B - Boundary coefficient calculation method of image and image noise removal method - Google Patents

Boundary coefficient calculation method of image and image noise removal method

Info

Publication number
CN101969570B
CN101969570B CN200910109108A
Authority
CN
China
Prior art keywords
pixel
value
gray
noise elimination
critical value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN 200910109108
Other languages
Chinese (zh)
Other versions
CN101969570A (en)
Inventor
张郡文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huirong Technology (Shenzhen) Co.,Ltd.
Silicon Motion Inc
Original Assignee
Silicon Motion Shenzhen Inc
Silicon Motion Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Silicon Motion Shenzhen Inc, Silicon Motion Inc filed Critical Silicon Motion Shenzhen Inc
Priority to CN 200910109108 priority Critical patent/CN101969570B/en
Publication of CN101969570A publication Critical patent/CN101969570A/en
Application granted granted Critical
Publication of CN101969570B publication Critical patent/CN101969570B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

The invention discloses a boundary coefficient calculation method for an image, wherein the image comprises a plurality of pixels arranged in the form of a Bayer pattern. The method comprises the steps of: (a) calculating an average gray level of a plurality of pixels of a particular type within a particular range; (b) calculating the gray-level difference between each of the particular-type pixels within the particular range and the average gray level to generate a plurality of gray-level difference values; (c) locating a particular pixel having a maximum gray-level difference value according to the gray-level difference values; and (e) calculating a ratio between the average gray level and the maximum gray-level difference value and using the ratio as the boundary coefficient.

Description

Boundary coefficient calculation method of an image and image noise elimination method
Technical field
The present invention relates to a method for calculating a boundary coefficient in an image and to an image noise elimination method using that method, and more particularly to such methods in which different weights are applied according to the boundary coefficient.
Background technology
Fig. 1 shows a block diagram of an image processing system 100 of the known art. As shown in Fig. 1, the image processing system 100 comprises an image sensor 101 (for example, a charge-coupled device, CCD), an analog amplifier 103, an analog-to-digital converter 105 and an image processing module 107. The image sensor 101 senses incident light from an object (an analog image signal AIS) and converts the incident light quantity into an electric charge (a sampled image signal SIS). The analog amplifier 103 then amplifies the sampled image signal SIS to produce an amplified sampled image signal ASIS. The analog-to-digital converter 105 converts the amplified sampled image signal ASIS into a digital image signal S_RGB in RGB format. The image processing module 107 converts the digital image signal S_RGB into a digital image signal S_YUV in YUV format, and sends S_YUV either to a display device 109 for playback, or to a compression processing module 111 for compression before being stored in a storage device 113. Here the image processing module 107 covers all the software and hardware that image processing may use, for example format conversion, interpolation, and encoding/decoding; since this is known to those skilled in the art, it is not repeated here.
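The patent does not specify how module 107 maps S_RGB to S_YUV; a minimal sketch using the common BT.601 full-range coefficients (an assumption, not the patent's own method) could look like:

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB sample to YUV using BT.601 full-range coefficients.
    The exact conversion used by image processing module 107 is not given in
    the patent; these coefficients are only the common textbook choice."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    u = 0.564 * (b - y) + 128               # blue-difference chroma, offset to mid-scale
    v = 0.713 * (r - y) + 128               # red-difference chroma, offset to mid-scale
    return y, u, v
```

A neutral gray maps to U = V = 128, which is why chroma noise shows up as deviations from mid-scale after this conversion.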
Fig. 2 shows schematic diagrams of the Bayer pattern of the known art; the Bayer pattern is an arrangement of a color filter array (CFA) for an image sensor. As shown in Fig. 2a, the Bayer pattern has a specific arrangement: the first row 201 is RGRGR, the second row 203 is GBGBG, and the third row is again RGRGR. Figs. 2b and 2c also show the Bayer pattern; as can be seen from them, the arrangement rule is the same as in Fig. 2a, except that Fig. 2a is drawn with an R pixel at the center while Figs. 2b and 2c are drawn with a B pixel and a G pixel at the center, respectively.
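The RGRG/GBGB tiling described above can be captured by a small helper; the coordinate convention (row 0 starting with R) is an assumption matching Fig. 2a:

```python
def bayer_color(row, col):
    """Color of the Bayer color-filter-array cell at (row, col):
    even rows tile R,G,R,G,...; odd rows tile G,B,G,B,..."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'
```

This reproduces the row sequence of Fig. 2a: RGRGR, then GBGBG, then RGRGR again.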
However, the known art either does not treat image noise at all, or its noise removal consumes considerable system resources and time.
Summary of the invention
Therefore, one objective of the present invention is to provide an image noise elimination method to remove noise in an image.
Another objective of the present invention is to provide a method for calculating a boundary coefficient in an image.
One embodiment of the invention discloses a boundary coefficient calculation method for an image, wherein the image comprises a plurality of pixels arranged in the form of a Bayer pattern, the method comprising: (a) calculating an average gray level of a plurality of pixels of a particular type within a particular range of the image; (b) calculating the gray-level differences between the particular-type pixels within the particular range and the average gray level to produce a plurality of gray-level difference values; (c) finding a specific pixel having a maximum gray-level difference value according to the gray-level difference values; and (e) calculating a ratio between the average gray level and the maximum gray-level difference value to serve as the boundary coefficient.
Another embodiment of the invention discloses an image noise elimination method, wherein the image comprises a plurality of pixels arranged in the form of a Bayer pattern, the method comprising: (a) calculating an average gray level of a plurality of pixels of a particular type within a first range of the image; (b) calculating the gray-level differences between those particular-type pixels and the average gray level to produce a plurality of gray-level difference values; (c) finding the specific pixel having a maximum gray-level difference value according to the gray-level difference values; (d) determining a second range according to the specific pixel and a target pixel in the first range; and (e) selectively utilizing the pixels in the second range to adjust the gray value of the target pixel to remove noise.
According to the above embodiments, a boundary coefficient can be calculated and different weighting operations can be performed with different weight values according to the boundary coefficient, so as to eliminate noise in the image.
Description of drawings
Fig. 1 is a block diagram of an image processing system of the known art;
Figs. 2a, 2b and 2c are schematic diagrams of the Bayer pattern of the known art;
Fig. 3 is a partial flowchart of an image noise elimination method according to one embodiment of the invention;
Fig. 4 is a partial flowchart of an image noise elimination method according to one embodiment of the invention.
[Description of main component symbols]
100 image processing system
101 image sensor
103 analog amplifier
105 analog-to-digital converter
107 image processing module
109 display device
111 compression processing module
113 storage device
201, 203, 205 rows
301~333 steps
Embodiment
Throughout the specification and claims, certain terms are used to refer to particular components. A person of ordinary skill in the art will appreciate that hardware manufacturers may refer to the same component by different names. This specification and the claims do not distinguish components by differences in name, but by differences in function. The term "comprise" used throughout the specification and claims is an open-ended term and should be construed as "including but not limited to". In addition, the term "couple" here covers any direct or indirect electrical connection. Thus, if the text describes a first device as coupled to a second device, the first device may be directly electrically connected to the second device, or indirectly electrically connected to the second device through another device or connection means.
Fig. 3 and Fig. 4 each show a partial flowchart of an image noise elimination method according to one embodiment of the invention; please refer to both figures together to better understand the invention. The image noise elimination method according to an embodiment of the invention may be implemented in software, firmware or hardware in the image processing module 107 of Fig. 1.
Fig. 3 comprises following steps:
Step 301
Use the Bayer pattern.
Step 303
Determine whether the pixel to be denoised (the target pixel) is a G pixel or one of the R/B pixels. If it is a G pixel, go to step 307; if it is an R/B pixel, go to step 305. More specifically, the judgment is whether the pixel located at the center of the Bayer pattern is a G pixel or an R/B pixel. Taking Figs. 2a and 2b as examples, the pixel at the center of the Bayer pattern is an R/B pixel (Rt/Bt), so step 305 follows. Taking Fig. 2c as an example, the pixel at the center of the Bayer pattern is a G pixel (Gt), so step 307 follows.
Step 305
Use the R/B pixel window (R/B window). Taking Fig. 2a as an example, the R pixel window refers to the eight R pixels nearest the target pixel Rt, i.e. R0~R7. Taking Fig. 2b as an example, the B pixel window refers to the eight B pixels nearest the target pixel Bt, i.e. B0~B7. The B pixel window and the R pixel window have the same relative position to the target pixel on the Bayer pattern, so they are collectively called the R/B pixel window.
Step 307
Use the G pixel window (G window). Taking Fig. 2c as an example, the G pixel window is the twelve G pixels nearest the target pixel Gt, i.e. G0~G11.
Step 309
Use either the G pixel window or the R/B pixel window to calculate the average gray level Avg of the pixels in the window. Taking the G pixel window as an example, calculate the average gray level AvgG of pixels G0~G11; taking the R pixel window as an example, calculate the average gray level AvgR of pixels R0~R7; taking the B pixel window as an example, calculate the average gray level AvgB of pixels B0~B7.
Step 311
Calculate the difference D between each of a plurality of pixels in the window (some or all of the pixels may be used) and the average gray level Avg. Taking the G pixel window as an example, calculate the differences DG0~DG11 between pixels G0~G11 and the average AvgG, or only the differences DG0~DG7 between pixels G0~G7 and AvgG. Taking the R pixel window as an example, calculate the differences DR0~DR7 between pixels R0~R7 and AvgR; taking the B pixel window as an example, calculate the differences DB0~DB7 between pixels B0~B7 and AvgB.
Step 313
Find the pixel with the maximum gray-level difference value Dmax and mark its position. For example, if DR1 is the maximum among DR0~DR7, it is set as Dmax and pixel R1 is marked. Since the difference between pixel R1 and the average gray level AvgR is the largest, a boundary may exist between pixel R1 and pixel Rt.
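Steps 309 to 313 can be sketched together as follows; `window` is a list of gray values of the same-type neighbours (e.g. R0~R7), and the use of the absolute difference is an assumption, since the patent only says "difference":

```python
def find_max_difference(window):
    """Steps 309-313: average gray level of the window, per-pixel differences
    from that average, and the index of the neighbour with the largest
    difference (the likely boundary direction)."""
    avg = sum(window) / len(window)                        # step 309: Avg
    diffs = [abs(g - avg) for g in window]                 # step 311: D0..Dn
    i_max = max(range(len(diffs)), key=diffs.__getitem__)  # step 313: mark it
    return avg, diffs, i_max
```

For R0~R7 = [100]*7 + [180], Avg is 110 and the largest difference (70) is at index 7, marking that neighbour as the direction in which a boundary may exist.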
Step 315
Use the information obtained in step 313 to find an orthogonal pixel window (orthogonal window), and use this window to perform weighting in the direction orthogonal to the pixel having the maximum gray-level difference value Dmax. For example, if step 313 finds that pixel R1 has the maximum difference DR1 from the average gray level AvgR, the orthogonal pixel window is established along the direction of the normal vector of the line connecting pixel R1 and pixel Rt; for example, the orthogonal pixel window comprises R3, Rt and R7.
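The patent does not give the exact numbering of R0~R7 around Rt. Assuming a clockwise numbering from the top-left corner (R0 = top-left, R1 = top, R2 = top-right, R3 = right, R4 = bottom-right, R5 = bottom, R6 = bottom-left, R7 = left), which is consistent with the example in step 315 (maximum difference at R1 selects R3 and R7), the orthogonal window can be picked by a lookup:

```python
def orthogonal_pair(i_max):
    """Hypothetical neighbour indexing: R0..R7 clockwise from the top-left.
    Returns the two neighbours orthogonal to the direction of neighbour i_max,
    so the weighting runs along (not across) the suspected boundary."""
    pair = {
        0: (2, 6),  # top-left diagonal -> orthogonal diagonal (TR, BL)
        1: (3, 7),  # top -> right/left (the patent's R1 -> R3, R7 example)
        2: (0, 4),  # top-right diagonal -> orthogonal diagonal (TL, BR)
        3: (1, 5),  # right -> top/bottom
        4: (2, 6), 5: (3, 7), 6: (0, 4), 7: (1, 5),
    }
    return pair[i_max]
```

Together with the target pixel Rt, the returned pair forms the orthogonal pixel window of step 315.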
Step 317
Calculate the boundary coefficient ED.
The boundary coefficient ED represents the boundary strength in the direction handled by the orthogonal pixel window: the larger it is, the more likely the image has a clear boundary (a strong boundary) in that direction; the smaller it is, the less obvious the boundary in that direction (a weak boundary). The reason for calculating the boundary coefficient ED is that image processing tends to produce noise or errors at boundaries, so different weighting is applied for obvious boundaries, less obvious boundaries and non-boundaries; these processes are detailed below.
In this embodiment, the invention proposes a method of calculating the boundary coefficient, which can be expressed by formula 1 below:
ED = Dmax / Avg    (formula 1)
As mentioned above, ED is the boundary coefficient, Dmax is the maximum gray-level difference value, and Avg is the average gray level. The boundary coefficient is expressed this way because, if the maximum difference Dmax is large relative to the average Avg, a clear boundary very likely exists (for example, a chair placed in front of a wall: the intersection of chair and wall is an obvious boundary, and the differences between pixel gray levels there are large). Conversely, if Dmax is small relative to Avg, a less obvious boundary, or even no boundary at all, likely exists (for example, a blank wall can be regarded as boundary-free, and the differences between its pixel gray levels are small). Therefore, by formula 1, even when Dmax and Avg are both small, their ratio still allows an accurate estimate of whether an obvious boundary exists.
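Formula 1 is a one-liner; the guard against a zero average below is an addition the patent does not discuss:

```python
def edge_coefficient(d_max, avg):
    """Formula 1: ED = Dmax / Avg. A zero (fully black) window is guarded
    against here, which the patent does not address."""
    if avg == 0:
        return 0.0  # assumption: treat an all-zero window as boundary-free
    return d_max / avg
```

Reusing the earlier example, Dmax = 70 over Avg = 110 yields a much larger ED than a flat region with Dmax = 2 over the same average, and the ratio stays meaningful even when both Dmax and Avg are small.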
Please refer next to Fig. 4 for the following steps; step 317 of Fig. 3 connects to step 319 of Fig. 4.
Step 319
Determine whether the target pixel is a G pixel or one of the R/B pixels. If it is a G pixel, go to step 321; if it is an R/B pixel, go to step 323.
Step 321
Set a first critical value Th1 as the R/B critical value, and calculate a second critical value Th2 according to the first critical value Th1. The setting of the critical values is explained below.
Step 323
Set a first critical value Th1 as the G critical value, and calculate a second critical value Th2 according to the first critical value Th1. The setting of the critical values is explained below.
Generally, noise appears on R/B pixels and on G pixels to different degrees, so when setting the critical values, different settings are used depending on whether the target pixel is a G pixel or an R/B pixel. The noise on R/B pixels is usually greater than the noise on G pixels, so the critical value used for R/B pixels is preferably greater than the critical value for G pixels. In this embodiment, the second critical value Th2 is produced by subtracting an offset from the first critical value Th1; the offset may be produced according to the state of the boundary. Note also that "first" and "second" as used in this specification and the claims do not limit any ordering; they merely denote different signals or parameters.
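Steps 319 to 323 and the offset rule can be sketched as follows; the concrete threshold and offset values are placeholders, since the patent does not fix them:

```python
def pick_thresholds(pixel_type, th1_rb=0.5, th1_g=0.3, offset=0.1):
    """Steps 319-323: choose Th1 by pixel type (R/B noise is usually larger
    than G noise, so the R/B threshold is set higher), then derive
    Th2 = Th1 - offset. All numeric values here are illustrative only."""
    th1 = th1_rb if pixel_type in ('R', 'B') else th1_g
    return th1, th1 - offset
```

The per-type Th1 reflects the observation above that R/B pixels are noisier than G pixels; Th2 then splits the sub-Th1 range into weak-boundary and non-boundary regions.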
Step 325
Determine whether the boundary coefficient ED is less than the second critical value Th2; if so, go to step 327; if not, go to step 329.
Step 327
Use a third weighted value and the orthogonal pixel window of step 315 to perform weighting on the pixels in the orthogonal pixel window. In this embodiment, if the boundary coefficient ED is less than the second critical value Th2, no boundary exists, so a weighted value suitable for non-boundary regions can be used.
Step 329
Determine whether the boundary coefficient ED is less than the first critical value Th1; if so, go to step 333; if not, go to step 331.
Step 331
Use a first weighted value and the orthogonal pixel window of step 315 to perform weighting on the pixels in the orthogonal pixel window. In this embodiment, if the boundary coefficient ED is greater than the first critical value Th1, an obvious boundary (a strong boundary) may exist, so a weighted value suitable for obvious boundary regions can be used.
Step 333
Use a second weighted value and the orthogonal pixel window of step 315 to perform weighting on the pixels in the orthogonal pixel window. In this embodiment, if the boundary coefficient ED is less than the first critical value Th1 (but not less than the second critical value Th2), a less obvious boundary (a weak boundary) may exist, so a weighted value suitable for less obvious boundary regions can be used.
The weighting function according to an embodiment of the invention, used in steps 327, 331 and 333 to calculate the weighted gray value, is described below, taking pixels Rt, R3 and R7 shown in Fig. 2a as an example:
Weighted gray value of the target pixel:
Rt' = Wt × (R7 + R3) + (1 − Wt) × Rt = Wt × (R7 + R3 − Rt) + Rt    (formula 2)
The weighted gray value calculated according to formula 2 can be applied to all R pixels in the orthogonal pixel window described in step 315, to increase the noise-elimination effect, or it may be applied only to the target pixel Rt.
In one embodiment, the third weighted value in step 327 may be 0, i.e. Wt = 0. Since the boundary coefficient ED is less than the second critical value Th2, it can be inferred that no edge exists in this Bayer pattern (a non-boundary region), so the gray value of the target pixel Rt need not be modified, and the weighted gray value Rt' can be set directly to the gray value of the target pixel Rt.
In one embodiment, the first weighted value in step 331 is 0.9, i.e. Wt = 0.1. Since the boundary coefficient ED is greater than the first critical value Th1, it can be inferred that a very obvious edge (a strong boundary region) exists between the target pixel Rt and pixel R1, so when modifying the gray value of the target pixel Rt, the gray values of pixels R3 and R7 are taken into account only to a limited degree, as appropriate.
In one embodiment, the second weighted value in step 333 is 0.3, i.e. Wt = 0.3. Since the boundary coefficient ED lies between the first critical value Th1 and the second critical value Th2, it can be inferred that no very obvious edge exists between the target pixel Rt and pixel R1 (a weak boundary region), so when modifying the gray value of the target pixel Rt, the gray values of pixels Rt, R3 and R7 can all be taken into account as appropriate.
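Steps 325 to 333 combined with formula 2 can be sketched as below. The three Wt values follow the embodiment's 0 / 0.1 / 0.3 but are explicitly non-limiting; note also that, exactly as printed, formula 2's coefficients sum to 1 + Wt rather than 1, and the sketch reproduces the formula as printed:

```python
def denoise_target(rt, r3, r7, ed, th1, th2, w_strong=0.1, w_weak=0.3):
    """Pick Wt from the boundary coefficient ED (steps 325-333), then apply
    formula 2: Rt' = Wt*(R7+R3) + (1-Wt)*Rt. The Wt values 0 / 0.1 / 0.3
    follow the embodiment and are illustrative only."""
    if ed < th2:       # step 327: non-boundary region, Wt = 0 keeps Rt as-is
        wt = 0.0
    elif ed >= th1:    # step 331: strong boundary region
        wt = w_strong
    else:              # step 333: weak boundary region (Th2 <= ED < Th1)
        wt = w_weak
    return wt * (r7 + r3) + (1 - wt) * rt
```

With ED below Th2 the target pixel is returned unchanged, and in the strong-boundary case 90% of Rt is preserved so a real edge is not smeared away.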
Note that formula 2 is not limited to use with R pixels; other pixels (for example B pixels and G pixels) can also be used with formula 2. Moreover, the first, second and third weighted values are not limited to 0.9, 0.3 and 0.
Note that the above embodiments are examples only and do not limit the present invention. For instance, the method is applicable to patterns other than the Bayer pattern, and to pixels other than RGB pixels, such as YUV or other pixels. It is also not limited to processing different pixel types with different ranges (steps 303, 305 and 307 may be deleted), nor to processing different pixel types with different critical values (steps 319, 321 and 323 may be deleted). Moreover, it is not limited to distinguishing strong-boundary, weak-boundary and non-boundary regions with two critical values: steps 329 and 333 may be omitted so that only non-boundary regions (step 327) and boundary regions (step 331) are distinguished, or step 327 may be omitted so that only strong-boundary regions (step 331) and weak-boundary regions (step 333) are distinguished. All such variations fall within the scope of the present invention.
Referring again to Fig. 1, when the above method embodiments are applied to the image processing system 100 shown in Fig. 1, they may be implemented in a noise elimination module, which may be located between the analog-to-digital converter 105 and the image processing module 107, or after the image processing module 107.
According to the above embodiments, a boundary coefficient can be calculated and different weighting operations can be performed with different weight values according to the boundary coefficient, so as to eliminate noise in the image.
The above are merely preferred embodiments of the present invention; all equivalent changes and modifications made according to the claims of the present application shall fall within the scope of the present invention.

Claims (13)

1. An image noise elimination method, wherein the image comprises a plurality of pixels arranged in the form of a Bayer pattern, characterized in that the method comprises:
(a) calculating an average gray level of a plurality of pixels of a particular type within a first range of the image;
(b) calculating the gray-level differences between those particular-type pixels and the average gray level to produce a plurality of gray-level difference values;
(c) finding the specific pixel having a maximum gray-level difference value according to the gray-level difference values;
(d) determining a second range according to the specific pixel and a target pixel in the first range; and
(e) selectively utilizing the pixels in the second range to adjust the gray value of the target pixel to remove noise.
2. The image noise elimination method according to claim 1, characterized in that the target pixel is located at the center of the first range, and in step (d) the second range is established along a normal vector extending from the line connecting the target pixel and the specific pixel.
3. The image noise elimination method according to claim 2, characterized in that the second range comprises a plurality of pixels that are of the same type as, and closest to, the target pixel.
4. The image noise elimination method according to claim 1, characterized in that step (e) further comprises:
(f) calculating a boundary coefficient; and
(g) if the boundary coefficient is greater than a first critical value, using a first weighted value to perform a weighting process on the gray value of the target pixel, and if the boundary coefficient is less than the first critical value, using a second weighted value to perform the weighting process on the gray value of the target pixel.
5. The image noise elimination method according to claim 4, characterized in that step (f) further comprises calculating a ratio between the average gray level and the maximum gray-level difference value to obtain the boundary coefficient.
6. The image noise elimination method according to claim 1, characterized in that the particular-type pixel is one of a Y pixel, a U pixel and a V pixel, or the particular-type pixel is one of an R pixel, a G pixel and a B pixel.
7. The image noise elimination method according to claim 1, characterized in that step (a) further comprises using first ranges of different sizes to calculate the average gray level of all pixels in the first range.
8. The image noise elimination method according to claim 4, characterized by further comprising: setting the first critical value according to the type of the particular-type pixel.
9. The image noise elimination method according to claim 1, characterized in that, if the type of the particular-type pixel is an R pixel or a B pixel, a higher critical value is set, and if the type of the particular-type pixel is a G pixel, a lower critical value is set.
10. The image noise elimination method according to claim 4, characterized by further comprising: if the boundary coefficient is less than a second critical value, using a third weighted value to perform the weighting process, wherein the second critical value is less than the first critical value.
11. The image noise elimination method according to claim 10, characterized in that the second critical value is produced by subtracting an adjustment value from the first critical value.
12. The image noise elimination method according to claim 4, characterized by further comprising: if the boundary coefficient is less than the first critical value but greater than a second critical value, using the second weighted value to perform the weighting process; and if the boundary coefficient is less than the second critical value, using a third weighted value to perform the weighting process, wherein the first critical value is greater than the second critical value.
13. The image noise elimination method according to claim 4, characterized in that step (g) performs the weighting process on the gray value of the target pixel with the following formula:
weighted gray value of the target pixel = weighted value × (gray value of a first pixel + gray value of a second pixel) + (1 − weighted value) × gray value of the target pixel, wherein the first pixel and the second pixel are located in the second range.
CN 200910109108 2009-07-27 2009-07-27 Boundary coefficient calculation method of image and image noise removal method Active CN101969570B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910109108 CN101969570B (en) 2009-07-27 2009-07-27 Boundary coefficient calculation method of image and image noise removal method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200910109108 CN101969570B (en) 2009-07-27 2009-07-27 Boundary coefficient calculation method of image and image noise removal method

Publications (2)

Publication Number Publication Date
CN101969570A CN101969570A (en) 2011-02-09
CN101969570B true CN101969570B (en) 2013-04-17

Family

ID=43548650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910109108 Active CN101969570B (en) 2009-07-27 2009-07-27 Boundary coefficient calculation method of image and image noise removal method

Country Status (1)

Country Link
CN (1) CN101969570B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2073169A2 (en) * 2007-12-18 2009-06-24 Sony Corporation Image processing apparatus and method, and program


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JP特开2006-23959A 2006.01.26
JP特开2007-213124A 2007.08.23
JP特开平7-334686A 1995.12.22

Also Published As

Publication number Publication date
CN101969570A (en) 2011-02-09

Similar Documents

Publication Publication Date Title
EP2278788B1 (en) Method and apparatus for correcting lens shading
US8922680B2 (en) Image processing apparatus and control method for image processing apparatus
EP1924082B1 (en) Image processing apparatus and image processing method
US20080043120A1 (en) Image processing apparatus, image capture apparatus, image output apparatus, and method and program for these apparatus
US8120677B2 (en) Imaging apparatus, adjustment method of black level, and program
JP2011055038A5 (en)
US20090167904A1 (en) Image processing circuit, imaging apparatus, method and program
JP2006129236A (en) Ringing eliminating device and computer readable recording medium with ringing elimination program recorded thereon
US20100253811A1 (en) Moving image noise reduction processing apparatus, computer- readable recording medium on which moving image noise reduction processing program is recorded, and moving image noise reduction processing method
US20080273793A1 (en) Signal processing apparatus and method, noise reduction apparatus and method, and program therefor
KR100775104B1 (en) Image stabilizer and system having the same and method thereof
JP2011055038A (en) Image processor, image processing method, and program
JP2009021905A (en) Contour enhancement apparatus
JPWO2006134923A1 (en) Image processing apparatus, computer program product, and image processing method
WO2021004066A1 (en) Image processing method and apparatus
KR100780948B1 (en) Apparatus for removing noise of color signal
JP2010045534A (en) Solid-state image-capturing apparatus
US6563536B1 (en) Reducing noise in an imaging system
JP4328115B2 (en) Solid-state imaging device
JP2006101447A (en) Image pickup apparatus and image-restoring method
JP4637812B2 (en) Image signal processing apparatus, image signal processing program, and image signal processing method
JP3709981B2 (en) Gradation correction apparatus and method
TWI430649B (en) Edge parameter computing method for image and image noise omitting method utilizing the edge parameter computing method
US8599288B2 (en) Noise reduction apparatus, method and computer-readable recording medium
CN101969570B (en) Boundary coefficient calculation method of image and image noise removal method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP03 Change of name, title or address

Address after: Chegongmiao Futian Tian'an Science and Technology Venture Park B901, B902 and B903, Futian District, Shenzhen City, Guangdong Province

Co-patentee after: Silicon Motion, Inc.

Patentee after: Huirong Technology (Shenzhen) Co.,Ltd.

Address before: 518040 Guangdong, Shenzhen, Futian District, Che Kung Temple Futian Tian Tian science and Technology Pioneer Park B901, B902, B903

Co-patentee before: Silicon Motion, Inc.

Patentee before: SILICON MOTION SHENZHEN Inc.

CP03 Change of name, title or address