CN103971368B - A kind of moving target foreground extracting method based on aberration - Google Patents

A kind of moving target foreground extracting method based on aberration

Info

Publication number
CN103971368B
CN103971368B
Authority
CN
China
Prior art keywords
model
color difference
background
foreground
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410196311.XA
Other languages
Chinese (zh)
Other versions
CN103971368A (en)
Inventor
孙采鹰
李少波
颉新春
张勇
杨培宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Core Universe Tianjin Technology Co ltd
Original Assignee
Inner Mongolia University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inner Mongolia University of Science and Technology filed Critical Inner Mongolia University of Science and Technology
Priority to CN201410196311.XA priority Critical patent/CN103971368B/en
Publication of CN103971368A publication Critical patent/CN103971368A/en
Application granted granted Critical
Publication of CN103971368B publication Critical patent/CN103971368B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a moving target foreground extraction method based on color difference. The method extracts the regions where moving objects are located using brightness as the basis, then further screens these regions by comparing them with a previously established color difference model: only the parts of a region whose difference from the color difference model exceeds a threshold are kept as foreground, thereby eliminating the influence of shadow and illumination on moving target foreground extraction. The invention can effectively extract moving targets in video monitoring.

Description

Moving target foreground extraction method based on color difference
Technical Field
The invention relates to a moving target foreground extraction method based on color difference, and belongs to the technical field of video monitoring.
Background
With the increasing demands of society and individuals for safety, video monitoring has become more and more widespread; systems that merely provide image data are increasingly unable to meet these demands, so more and more video monitoring devices embed intelligent detection modules.
Extraction of the moving foreground is usually the first step of intelligent monitoring; processing such as feature analysis is then performed on the extracted foreground.
Because a camera in a monitoring setting generally watches a fixed scene and does not move, the common moving target foreground extraction method exploiting this characteristic is background subtraction: images are first collected to build a background model, and the background model is then subtracted from each collected frame to obtain the moving foreground.
Background subtraction generally has three stages: background modeling and training, foreground detection, and background updating. The method has limitations in practice; for example, shadows cast by sunlight striking an object are also extracted as foreground, and although the brightness change of a shadow is larger than that of the actual moving object, the shadow is not the part of interest.
The invention provides an improved statistical moving target background extraction method based on color difference, which can effectively suppress the influence of illumination and shadow on the extraction result.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the defects of the prior art and provide a moving target foreground extraction method based on color difference which can effectively extract moving targets in video monitoring.
In order to solve the problems, the invention adopts the following technical scheme:
the invention provides a moving target foreground extraction method based on color difference, characterized in that the regions where moving objects are located are extracted using brightness as the basis; these regions are then further screened by comparison with a previously established color difference model, and only the parts of a region whose difference from the color difference model exceeds a threshold are regarded as foreground, so that the influence of shadow and illumination on moving target foreground extraction is eliminated.
The invention comprises the following steps:
1) establishing a background model:
setting a sequence threshold; when the number of collected frames reaches the threshold, the image sequence collected so far is averaged, and luminance and color difference models are established from the average;
2) extracting the motion foreground:
comparing each collected frame with the background model obtained in step 1); when the condition is met, the pixel point is regarded as foreground, yielding a binary image;
3) updating a background model:
the brightness of the monitored environment changes over time, so the background model is updated in real time; during updating, non-background (i.e. foreground) pixels are not updated, and the binary image extracted in step 2) serves as the basis for updating the background pixels in this step;
4) shadow and illumination suppression:
counting the foreground extracted from the background after morphological operations and recording the boundary values of each independent foreground region; connected-region analysis is performed on the background extraction result to quickly find the boundary values of the independent foreground regions, thereby suppressing shadow and illumination;
5) updating the color difference model:
in the original background, a new object that stays in the field of view for a long time becomes part of a new background, so the previously established color difference model needs to be updated in real time.
The method comprises the following specific steps:
Step one: establishing a background model:
To establish the background model, a sequence threshold N is first set; when the number of collected frames reaches N, the image sequence is averaged, and the luminance and color difference models are established from the average, as shown in equation (1):
f′(x,y) = (1/N)·Σ_{i=1..N} f_i(x,y)    (1)
where 1 ≤ x ≤ m, 1 ≤ y ≤ n, m and n are the width and height of the image, N is the sequence threshold, and f_i(x,y) is the value of pixel (x,y) in the i-th collected frame. When f(x,y) is a luminance signal, the established model is the luminance model; when f(x,y) is a color difference signal, the established model is a color difference model. A global threshold is used to initialize the variance model.
Since fractional values are used frequently in the subsequent calculations, and the DM64x+ series DSP implementing the algorithm of the present invention is a fixed-point DSP, a fixed-point fractional format is used when building the background model.
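For illustration, the initialization of equation (1) might look as follows in floating point (a minimal NumPy sketch; the function name, the `frames` layout, and the `init_var` default are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def build_background_model(frames, init_var=400.0):
    """Build one channel's background model per equation (1).

    frames   -- array of shape (N, height, width): the first N collected
                frames of a single signal (Y for the luminance model,
                Cb or Cr for a color difference model)
    init_var -- global threshold used to initialize the variance model
    """
    background = frames.astype(np.float64).mean(axis=0)   # f'(x, y)
    variance = np.full(background.shape, init_var)        # global init
    return background, variance
```

On the DM64x+ fixed-point DSP the same averaging would be carried out in a fixed-point fractional format, as the text notes.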
Step two: extracting a motion foreground:
Each collected frame is compared with the background model obtained in step one; when equation (2) is satisfied, the pixel is regarded as foreground.
(f(x,y) - B(x,y))² > V′(x,y)
(f(x,y) - B(x,y))² > V(x,y)    (2)
where B(x,y) is the luminance background model, V(x,y) is the variance model at initialization, and V′(x,y) is the variance model after model updating. Since the result is a binary image, each bit represents one pixel of the image in order to save space.
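A sketch of the per-pixel test in equation (2), assuming NumPy arrays (names are illustrative); a pixel is foreground only when its squared difference from the background exceeds both variance models:

```python
import numpy as np

def luminance_foreground(frame, background, var_init, var_updated):
    """Binary foreground mask per equation (2)."""
    d2 = (frame.astype(np.float64) - background) ** 2
    # both conditions of equation (2) must hold
    return (d2 > var_updated) & (d2 > var_init)
```

The patent packs the resulting binary image one pixel per bit to save space; the boolean array above is the unpacked equivalent (np.packbits would give the bit-packed form).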
Step three: updating a background model:
The background is updated in real time, including both the background model and the variance model, according to equation (3).
B′(x,y) = (1 - α)·B(x,y) + α·f(x,y)
V′(x,y) = (1 - α)·V(x,y) + α·(f(x,y) - B′(x,y))²    (3)
where B(x,y) is the luminance background model, B′(x,y) is the background model after updating, V(x,y) is the variance model at initialization, V′(x,y) is the variance model after updating, f(x,y) is the pixel value at point (x,y), and α is the weight coefficient of the update process.
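A sketch of the update in equation (3), combined with the rule from this step that pixels marked as foreground in the binary image are left unchanged (the α default is an illustrative assumption):

```python
import numpy as np

def update_background(frame, background, variance, fg_mask, alpha=0.05):
    """Running update per equation (3); foreground pixels keep their
    old background and variance values, as step three requires."""
    frame = frame.astype(np.float64)
    bg_new = (1 - alpha) * background + alpha * frame
    var_new = (1 - alpha) * variance + alpha * (frame - bg_new) ** 2
    bg_new[fg_mask] = background[fg_mask]   # do not update foreground
    var_new[fg_mask] = variance[fg_mask]
    return bg_new, var_new
```

Equation (5) below has exactly the same form, with β in place of α and the Cb/Cr channels in place of luminance.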
Step four: shadow and illumination suppression:
The chip implementing the extraction algorithm is a DSP whose core frequency is lower than that of a PC's CPU, so classical statistical background extraction must be suitably improved to meet the requirements of the DSP platform. The classical statistical background extraction method is improved as follows: the foreground extracted from the background is counted after morphological operations, and the boundary values of each independent foreground region are recorded. Connected-region analysis of the background extraction result is used to quickly find the boundary values of the independent foreground regions. Connected-region analysis requires the following steps:
a) The frame is scanned twice, from top to bottom and from left to right. In the first scan, when a non-background point A(i, j) is encountered, its two adjacent pixels A(i-1, j) to the left and A(i, j-1) above are checked.
b) If neither A(i-1, j) nor A(i, j-1) is marked, a new marker is assigned to A(i, j);
c) if exactly one of A(i-1, j) and A(i, j-1) is marked, A(i, j) is given the same marker;
d) if both A(i-1, j) and A(i, j-1) are marked: when the two markers are the same, A(i, j) is given that marker; when they differ, A(i, j) is marked with one of them and the two markers are recorded as equivalent.
e) Each marker is replaced with the lowest marker in its equivalence table entry, thereby relabeling pixels that belong to the same connected region but were given different markers.
Each individual connected region is then traversed and its top, bottom, left, and right boundaries are determined; once the boundaries of the foreground regions are obtained, the rectangle corresponding to each foreground can be determined. A second background extraction is then performed on each rectangle using the color difference signals.
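A compact sketch of steps a) to e) and the bounding-box search, assuming a NumPy boolean mask; the `parent` list stands in for the marker equivalence table, and all names are illustrative:

```python
import numpy as np

def label_and_boxes(mask):
    """Two-pass connected-region labeling (4-neighborhood, steps a-e),
    returning the label image and {label: (top, bottom, left, right)}."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=np.int32)
    parent = [0]                       # equivalence table (union-find)

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    for i in range(h):                 # first scan: top-down, left-right
        for j in range(w):
            if not mask[i, j]:
                continue
            left = labels[i, j - 1] if j > 0 else 0
            up = labels[i - 1, j] if i > 0 else 0
            if left == 0 and up == 0:          # b) assign a new marker
                parent.append(next_label)
                labels[i, j] = next_label
                next_label += 1
            elif left and up:                  # d) both neighbors marked
                la, ub = find(left), find(up)
                labels[i, j] = min(la, ub)
                parent[max(la, ub)] = min(la, ub)   # record equivalence
            else:                              # c) one neighbor marked
                labels[i, j] = left or up

    boxes = {}
    for i in range(h):                 # second scan: e) lowest marker,
        for j in range(w):             # plus bounding-box accumulation
            if labels[i, j]:
                r = find(labels[i, j])
                labels[i, j] = r
                t, b, l, rr = boxes.get(r, (i, i, j, j))
                boxes[r] = (min(t, i), max(b, i), min(l, j), max(rr, j))
    return labels, boxes
```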
According to the rectangular boundary provided by the connected region, the color difference signals in the rectangle, namely the Cb and Cr signals, are extracted from the frame. The Cb and Cr signals in the rectangle are subtracted from the Cb and Cr color difference background models respectively and compared against the color difference variance model. If the color difference signals of a pixel in the rectangle satisfy equation (4), the pixel is regarded as color difference foreground.
(f_c(x,y) - B_c(x,y))² > V′_c(x,y)
(f_c(x,y) - B_c(x,y))² > V_c(x,y)    (4)
where f_c(x,y) is the color difference signal of the new frame, B_c(x,y) is the color difference background model, V′_c(x,y) is the color difference variance model after real-time updating, and V_c(x,y) is the initial color difference variance model.
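A sketch of this second-pass test for one rectangle; the patent does not state whether the Cb and Cr results are combined with AND or OR, so the sketch assumes a pixel is chroma foreground if either channel satisfies equation (4):

```python
def chroma_foreground(box, cb, cr, bg_cb, bg_cr,
                      var_cb, var_cr, var0_cb, var0_cr):
    """Equation (4) inside one bounding rectangle (top, bottom, left,
    right); cb/cr are the current frame's color difference channels."""
    t, b, l, r = box
    s = (slice(t, b + 1), slice(l, r + 1))
    d_cb = (cb[s].astype(float) - bg_cb[s]) ** 2
    d_cr = (cr[s].astype(float) - bg_cr[s]) ** 2
    fg_cb = (d_cb > var_cb[s]) & (d_cb > var0_cb[s])
    fg_cr = (d_cr > var_cr[s]) & (d_cr > var0_cr[s])
    return fg_cb | fg_cr   # OR of the two channels is an assumption
```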
Step five: updating the color difference model:
Considering that an object staying in the field of view for a long time may become part of a new background, the established color difference model is updated in real time according to equation (5).
B′_c(x,y) = (1 - β)·B_c(x,y) + β·f_c(x,y)
V′_c(x,y) = (1 - β)·V_c(x,y) + β·(f_c(x,y) - B′_c(x,y))²    (5)
where B_c(x,y) is the initial color difference background model, B′_c(x,y) is the color difference background model after real-time updating, f_c(x,y) is the color difference signal of the new frame, V_c(x,y) is the initial color difference variance model, V′_c(x,y) is the color difference variance model after real-time updating, and β is the weight coefficient of the color difference model update.
The extraction results of equation (2) and equation (4) are then combined; pixels satisfying both equations constitute the real moving foreground.
Further, a morphological operation is performed on the binary image obtained after the combination in step five, filling holes and removing noise interference.
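A sketch of this final combination and cleanup, using SciPy's binary morphology as a stand-in for the unspecified morphological operation (the 3×3 structuring element is an assumption):

```python
import numpy as np
from scipy import ndimage

def final_foreground(fg_luma, fg_chroma):
    """AND the equation (2) and equation (4) masks, then fill holes
    and remove speckle noise."""
    mask = fg_luma & fg_chroma
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_fill_holes(mask)     # fill internal cavities
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    return mask
```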
The advantage of the invention is that an improved statistical background extraction method is provided for field environments in which the brightness of a moving object changes severely under external influences: on the basis of classical statistical background extraction, a color difference model is built from the two color difference signals. Since the most significant change a moving object introduces is in chromaticity, chromaticity reflects the characteristics of the moving object better than brightness does. The regions where moving objects are located are extracted using brightness as the basis, the regions are further screened by comparison with the previously established color difference model, and only the parts whose difference from the color difference model exceeds a threshold are regarded as foreground, eliminating the influence of shadow and illumination on moving target foreground extraction. The invention therefore suppresses shadow and illumination well.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a flow chart of a moving object foreground extraction algorithm based on color difference according to the present invention;
FIG. 2 is a graph of the effect of classical statistical background extraction according to the present invention;
FIG. 3 is a diagram of the effect of further extracting the moving foreground using the color difference signals on the basis of classical statistical background extraction according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the embodiments.
Step one: establishment of the background model
To establish the background model, a sequence threshold N is first set; when the number of collected frames reaches N, the image sequence is averaged, and the luminance and color difference models are established from the average, as shown in equation (1):
f′(x,y) = (1/N)·Σ_{i=1..N} f_i(x,y)    (1)
where 1 ≤ x ≤ m, 1 ≤ y ≤ n, m and n are the width and height of the image, N is the sequence threshold, and f_i(x,y) is the value of pixel (x,y) in the i-th collected frame. When f(x,y) is a luminance signal, the established model is the luminance model; when f(x,y) is a color difference signal, the established model is a color difference model. A global threshold is used to initialize the variance model.
Since fractional values are used frequently in the subsequent calculations, and the DM64x+ series DSP implementing the algorithm of the present invention is a fixed-point DSP, a fixed-point fractional format is used when building the background model.
Step two: motion foreground extraction
Each collected frame is compared with the background model obtained in step one; when equation (2) is satisfied, the pixel is regarded as foreground.
(f(x,y) - B(x,y))² > V′(x,y)
(f(x,y) - B(x,y))² > V(x,y)    (2)
where B(x,y) is the luminance background model, V(x,y) is the variance model at initialization, and V′(x,y) is the variance model after model updating. Since the result is a binary image, each bit represents one pixel of the image in order to save space.
Step three: background model update
The background is updated in real time, including both the background model and the variance model, according to equation (3).
B′(x,y) = (1 - α)·B(x,y) + α·f(x,y)
V′(x,y) = (1 - α)·V(x,y) + α·(f(x,y) - B′(x,y))²    (3)
where B(x,y) is the luminance background model, B′(x,y) is the background model after updating, V(x,y) is the variance model at initialization, V′(x,y) is the variance model after updating, f(x,y) is the pixel value at point (x,y), and α is the weight coefficient of the update process.
Step four: shadow and illumination suppression
The chip implementing the extraction algorithm is a DSP whose core frequency is lower than that of a PC's CPU, so classical statistical background extraction must be suitably improved to meet the requirements of the DSP platform. The classical statistical background extraction method is improved as follows: the foreground extracted from the background is counted after morphological operations, and the boundary values of each independent foreground region are recorded. Connected-region analysis of the background extraction result is used to quickly find the boundary values of the independent foreground regions. Connected-region analysis requires the following steps:
a) The frame is scanned twice, from top to bottom and from left to right. In the first scan, when a non-background point A(i, j) is encountered, its two adjacent pixels A(i-1, j) to the left and A(i, j-1) above are checked.
b) If neither A(i-1, j) nor A(i, j-1) is marked, a new marker is assigned to A(i, j);
c) if exactly one of A(i-1, j) and A(i, j-1) is marked, A(i, j) is given the same marker;
d) if both A(i-1, j) and A(i, j-1) are marked: when the two markers are the same, A(i, j) is given that marker; when they differ, A(i, j) is marked with one of them and the two markers are recorded as equivalent.
e) Each marker is replaced with the lowest marker in its equivalence table entry, thereby relabeling pixels that belong to the same connected region but were given different markers.
Each individual connected region is then traversed and its top, bottom, left, and right boundaries are determined; once the boundaries of the foreground regions are obtained, the rectangle corresponding to each foreground can be determined. A second background extraction is then performed on each rectangle using the color difference signals.
According to the rectangular boundary provided by the connected region, the color difference signals in the rectangle, namely the Cb and Cr signals, are extracted from the frame. The Cb and Cr signals in the rectangle are subtracted from the Cb and Cr color difference background models respectively and compared against the color difference variance model. If the color difference signals of a pixel in the rectangle satisfy equation (4), the pixel is regarded as color difference foreground.
(f_c(x,y) - B_c(x,y))² > V′_c(x,y)
(f_c(x,y) - B_c(x,y))² > V_c(x,y)    (4)
where f_c(x,y) is the color difference signal of the new frame, B_c(x,y) is the color difference background model, V′_c(x,y) is the color difference variance model after real-time updating, and V_c(x,y) is the initial color difference variance model.
Step five: color difference model update
Considering that an object staying in the field of view for a long time may become part of a new background, the established color difference model is updated in real time according to equation (5).
B′_c(x,y) = (1 - β)·B_c(x,y) + β·f_c(x,y)
V′_c(x,y) = (1 - β)·V_c(x,y) + β·(f_c(x,y) - B′_c(x,y))²    (5)
where B_c(x,y) is the initial color difference background model, B′_c(x,y) is the color difference background model after real-time updating, f_c(x,y) is the color difference signal of the new frame, V_c(x,y) is the initial color difference variance model, V′_c(x,y) is the color difference variance model after real-time updating, and β is the weight coefficient of the color difference model update.
The extraction results of equation (2) and equation (4) are combined; pixels satisfying both equations constitute the real moving foreground. Finally, a morphological operation is performed on the combined binary image, filling holes and removing noise interference.
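Putting the pieces together, a hypothetical per-frame loop over a YCbCr sequence might read as follows (all function names refer to the sketches above and are illustrative, not part of the patent):

```python
import numpy as np

# y, cb, cr: current frame channels; background and variance models
# built per equation (1); beta is the color difference update weight
beta = 0.05
fg = luminance_foreground(y, bg_y, var0_y, var_y)          # eq. (2)
bg_y, var_y = update_background(y, bg_y, var_y, fg)        # eq. (3)
labels, boxes = label_and_boxes(fg)                        # steps a)-e)
chroma = np.zeros_like(fg)
for box in boxes.values():                                 # eq. (4) per box
    t, b, l, r = box
    chroma[t:b + 1, l:r + 1] = chroma_foreground(
        box, cb, cr, bg_cb, bg_cr, var_cb, var_cr, var0_cb, var0_cr)
bg_cb, var_cb = update_background(cb, bg_cb, var_cb, chroma, alpha=beta)  # eq. (5)
bg_cr, var_cr = update_background(cr, bg_cr, var_cr, chroma, alpha=beta)
moving = final_foreground(fg, chroma)    # combine eq. (2) & (4), cleanup
```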
Finally, it should be noted that the above examples are intended only to illustrate the invention clearly and do not limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments exhaustively. Obvious variations or modifications derived from the invention remain within its scope of protection.

Claims (2)

1. A moving target foreground extraction method based on color difference, characterized by comprising the following specific steps:
step one: establishing a background model:
the establishment of the background model first requires setting a sequence threshold; when the number of collected frames reaches the threshold, the preceding image sequence is averaged, and luminance and color difference models are established from the average, as shown in equation (1):
f′(x,y) = (1/N)·Σ_{i=1..N} f_i(x,y)    (1)
wherein 1 ≤ x ≤ m, 1 ≤ y ≤ n, m and n are the width and height of the image, N is the sequence threshold, and f_i(x,y) is the value of pixel (x,y) in the i-th collected frame; when f(x,y) is a luminance signal, the established model is a luminance model, and when f(x,y) is a color difference signal, the established model is a color difference model; a global threshold is used to initialize the variance model;
since fractional values are frequently used in subsequent calculations, and the DM64x+ series DSPs realizing the algorithm are fixed-point DSPs, a fixed-point fractional format is used when establishing the background model;
step two: extracting a motion foreground:
comparing the collected image of one frame with the background model obtained in the first step, and considering the pixel point as a foreground when the formula (2) is met;
(f(x,y) - B(x,y))² > V′(x,y)
(f(x,y) - B(x,y))² > V(x,y)    (2)
wherein B(x,y) is the luminance background model, V(x,y) is the variance model at initialization, and V′(x,y) is the variance model after model updating; since the result is a binary image, to save space each bit represents one pixel of the image;
step three: updating a background model:
updating the background in real time, including updating a background model and a variance model; updating the background in real time according to a formula (3);
B′(x,y) = (1 - α)·B(x,y) + α·f(x,y)
V′(x,y) = (1 - α)·V(x,y) + α·(f(x,y) - B′(x,y))²    (3)
wherein, B (x, y) is a brightness background model, and B' (x, y) is a background model after model updating; v (x, y) is a variance model at the time of initialization, V' (x, y) is a variance model after model update, f (x, y) is a pixel value at a point (x, y), and α is a weight coefficient of the update process;
step four: shadow and illumination suppression:
the single chip realizing the extraction algorithm is a DSP, and the core frequency of the DSP is lower than that of a PC's CPU, so classical statistical background extraction needs to be suitably improved to meet the requirements of the DSP platform; the classical statistical background extraction method is improved as follows: the foreground extracted from the background is counted after morphological operations, and the boundary values of each independent foreground region are recorded; connected-region analysis is performed on the background extraction result to quickly find the boundary values of the independent foreground regions; connected-region analysis requires the following steps:
a) scanning a frame of image twice, from top to bottom and from left to right; in the first scan, when a non-background point A(i, j) is encountered, checking its two adjacent pixels A(i-1, j) to the left and A(i, j-1) above;
b) assigning a new marker to A (i, j) if neither A (i-1, j) nor A (i, j-1) is marked;
c) if exactly one of A(i-1, j) and A(i, j-1) is marked, giving A(i, j) the same marker;
d) if both A(i-1, j) and A(i, j-1) are marked: if the two markers are the same, giving A(i, j) that marker; if the two markers are different, marking the point A(i, j) as one of them while recording the two markers as equivalent;
e) replacing each marker with the lowest marker in its equivalence table entry, thereby relabeling pixels belonging to the same connected region that were marked with different markers;
searching each single connected region and determining its upper, lower, left, and right boundaries; after the boundaries of the foreground regions are obtained, determining the rectangle corresponding to each foreground; performing a second background extraction on each rectangle according to the color difference signals;
extracting color difference signals including Cb and Cr signals in a rectangle from a frame of image according to the rectangular boundary provided by the connected region; respectively subtracting Cb and Cr color difference background models from Cb and Cr color difference signals in the rectangle; and comparing with a color difference variance model; if the color difference signals of the pixels in the rectangle meet the formula (4), the pixels are considered as the foreground of the color difference signals;
(f_c(x,y) - B_c(x,y))² > V′_c(x,y)
(f_c(x,y) - B_c(x,y))² > V_c(x,y)    (4)
wherein f_c(x,y) is the color difference signal of a new frame image, B_c(x,y) is the color difference background model, V′_c(x,y) is the color difference variance model after real-time updating, and V_c(x,y) is the initial color difference variance model;
step five: and (3) updating a color difference model:
considering that an object staying in the field of view for a long time may become part of a new background, the established color difference model is updated in real time, following equation (5);
B′_c(x,y) = (1 - β)·B_c(x,y) + β·f_c(x,y)
V′_c(x,y) = (1 - β)·V_c(x,y) + β·(f_c(x,y) - B′_c(x,y))²    (5)
wherein B_c(x,y) is the initial color difference background model, B′_c(x,y) is the color difference background model after real-time updating, f_c(x,y) is the color difference signal of a new frame image, V_c(x,y) is the initial color difference variance model, V′_c(x,y) is the color difference variance model after real-time updating, and β is the weight coefficient of the color difference model update;
the extraction results of equation (2) and equation (4) are combined, and the pixel points satisfying both equation (2) and equation (4) are the real moving foreground.
2. The moving target foreground extraction method based on color difference as claimed in claim 1, wherein a morphological operation is performed on the binary image obtained after the combination in step five, filling holes and removing noise interference.
CN201410196311.XA 2014-05-12 2014-05-12 A kind of moving target foreground extracting method based on aberration Active CN103971368B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410196311.XA CN103971368B (en) 2014-05-12 2014-05-12 A kind of moving target foreground extracting method based on aberration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410196311.XA CN103971368B (en) 2014-05-12 2014-05-12 A kind of moving target foreground extracting method based on aberration

Publications (2)

Publication Number Publication Date
CN103971368A CN103971368A (en) 2014-08-06
CN103971368B true CN103971368B (en) 2017-03-15

Family

ID=51240817

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410196311.XA Active CN103971368B (en) 2014-05-12 2014-05-12 A kind of moving target foreground extracting method based on aberration

Country Status (1)

Country Link
CN (1) CN103971368B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993778A (en) * 2019-04-11 2019-07-09 浙江立元通信技术股份有限公司 A kind of method and device of determining target position

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236606A (en) * 2008-03-07 2008-08-06 北京中星微电子有限公司 Shadow cancelling method and system in vision frequency monitoring
CN102054270A (en) * 2009-11-10 2011-05-11 华为技术有限公司 Method and device for extracting foreground from video image
CN103679704A (en) * 2013-11-22 2014-03-26 中国人民解放军第二炮兵工程大学 Video motion shadow detecting method based on lighting compensation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4803304B2 (en) * 2010-02-22 2011-10-26 カシオ計算機株式会社 Image processing apparatus and program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236606A (en) * 2008-03-07 2008-08-06 北京中星微电子有限公司 Shadow cancelling method and system in vision frequency monitoring
CN102054270A (en) * 2009-11-10 2011-05-11 华为技术有限公司 Method and device for extracting foreground from video image
CN103679704A (en) * 2013-11-22 2014-03-26 中国人民解放军第二炮兵工程大学 Video motion shadow detecting method based on lighting compensation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李光才 (Li Guangcai). Research on moving vehicle flow detection methods based on video surveillance. China Master's Theses Full-text Database, Information Science and Technology, vol. 2008, no. 9, 2008, last paragraph of p. 31 and p. 33. *
秦秀丽 (Qin Xiuli). Shadow removal algorithm based on the YUV color space and graph-cut segmentation. China Master's Theses Full-text Database, Information Science and Technology, vol. 2010, no. 12, 2010, sections 4.2.2 and 3.1.3, p. 32, last paragraph of p. 24, pp. 41-42. *

Also Published As

Publication number Publication date
CN103971368A (en) 2014-08-06

Similar Documents

Publication Publication Date Title
CN109859171B (en) Automatic floor defect detection method based on computer vision and deep learning
WO2020224458A1 (en) Method for detecting corona discharge employing image processing
CN102568005B (en) Moving object detection method based on Gaussian mixture model
CN105404847B (en) A kind of residue real-time detection method
WO2018086299A1 (en) Image processing-based insulator defect detection method and system
CN106548160A (en) A kind of face smile detection method
CN106682665B (en) Seven-segment type digital display instrument number identification method based on computer vision
CN104166983A (en) Motion object real time extraction method of Vibe improvement algorithm based on combination of graph cut
CN102968782A (en) Automatic digging method for remarkable objects of color images
CN111310768B (en) Saliency target detection method based on robustness background prior and global information
WO2012005461A2 (en) Method for automatically calculating information on clouds
WO2019197021A1 (en) Device and method for instance-level segmentation of an image
CN106447673A (en) Chip pin extraction method under non-uniform illumination condition
CN104217440B (en) A kind of method extracting built-up areas from remote sensing images
CN110782409B (en) Method for removing shadow of multiple moving objects
CN108961230A (en) The identification and extracting method of body structure surface FRACTURE CHARACTERISTICS
CN108711160B (en) Target segmentation method based on HSI (high speed input/output) enhanced model
CN110807738A (en) Fuzzy image non-blind restoration method based on edge image block sharpening
CN114881869A (en) Inspection video image preprocessing method
CN111489333A (en) No-reference night natural image quality evaluation method
CN114494887A (en) Remote sensing image classification method and device, computer equipment and readable storage medium
CN103971368B (en) A kind of moving target foreground extracting method based on aberration
CN108830834B (en) Automatic extraction method for video defect information of cable climbing robot
CN106228553A (en) High-resolution remote sensing image shadow Detection apparatus and method
CN111401121A (en) Method for realizing citrus segmentation based on super-pixel feature extraction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Sun Caiying

Inventor after: Li Shaobo

Inventor after: Xie Xinchun

Inventor after: Zhang Yong

Inventor after: Yang Peihong

Inventor before: Sun Caiying

Inventor before: Lan Xiaowen

Inventor before: Dong Daming

COR Change of bibliographic data
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20231229

Address after: 301500 Tianjin Binhai New Area Economic and Technological Development Zone Binhai Zhongguancun Science and Technology Park Datang Base East Area Building 2 Unit 7 301-19

Patentee after: Core Universe (Tianjin) Technology Co.,Ltd.

Address before: 014010 No.7 Alwen Street, Kunqu District, Baotou City, Inner Mongolia Autonomous Region

Patentee before: INNER MONGOLIA University OF SCIENCE AND TECHNOLOGY