CN103150550B - Road pedestrian event detection method based on trajectory analysis - Google Patents

Road pedestrian event detection method based on trajectory analysis

Info

Publication number
CN103150550B
CN103150550B CN201310045531.8A CN201310045531A
Authority
CN
China
Prior art keywords
block
pixel
value
threshold value
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310045531.8A
Other languages
Chinese (zh)
Other versions
CN103150550A (en)
Inventor
宋焕生
张骁
徐晓娟
李文敏
闫国伟
刘冬妹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Dewei Shitong Intelligent Technology Co ltd
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University filed Critical Changan University
Priority to CN201310045531.8A priority Critical patent/CN103150550B/en
Publication of CN103150550A publication Critical patent/CN103150550A/en
Application granted granted Critical
Publication of CN103150550B publication Critical patent/CN103150550B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a road pedestrian event detection method based on trajectory analysis. Each frame image is divided into multiple block regions; for each block, the background block at the same position in a background image is found, the sum of absolute gray-level differences is computed, and the block is assigned a value accordingly, so that the target blocks are determined. Optimal corner points are then found and a feature point is obtained; at the same time a target structure is created that records the target's feature point position and match-tracking counter. The template is searched for in the current frame image, and the above process is repeated to obtain the target's tracking trajectory. By looking up a mapping table, the actual distance corresponding to the tracking trajectory is obtained, the speed of the tracking trajectory is computed, and whether the target is a pedestrian is judged. The detection method of the present invention can detect all pedestrian targets within the video range, is not limited by the environment, can process real-time video, has a short detection time, is easy to implement, and has relatively high accuracy; it is well suited to real-time detection of road pedestrian events and has broad application prospects.

Description

Road pedestrian event detection method based on trajectory analysis
Technical field
The invention belongs to the field of video detection, and specifically relates to a road pedestrian event detection method based on trajectory analysis.
Background art
A road pedestrian event refers to the behavior of a pedestrian entering a motor-vehicle lane without any protective measures and obstructing the normal travel of motor vehicles. Although traffic management departments have taken measures, pedestrians intruding into motor-vehicle lanes still occurs from time to time. This is very dangerous: it easily causes traffic congestion and even leads to traffic accidents, seriously affecting people's normal lives. Traditional pedestrian event detection methods mainly include temperature detection, electronic loop detection and digital video detection. Temperature detection is easily disturbed by vehicles; electronic loops have poor expandability, and their installation and maintenance require suspending traffic and breaking up the road surface. These methods therefore cannot be widely applied in real life.
At present, new projects increasingly adopt video-based traffic information detection, whose installation and maintenance do not require damaging the roadbed, whose detection area is large, and which is convenient and flexible to deploy. Video-based pedestrian detection has therefore become a research focus; existing methods mainly include pedestrian detection based on neural networks and template-matching detection methods based on the wavelet transform. Although these methods can raise pedestrian event alarms, their processing of video data is complex and unreliable, and they cannot meet the real-time requirements of detection or the needs of practical application.
Summary of the invention
In view of the shortcomings and deficiencies of the prior art, the object of the invention is to provide a road pedestrian event detection method based on trajectory analysis that can detect all pedestrian events within the video range in real time and reliably.
In order to accomplish the above task, the present invention adopts the following technical scheme:
A road pedestrian event detection method based on trajectory analysis, the method being carried out according to the following steps:
Step 1: establish the mapping relation from image pixels to the actual road-surface distance, i.e. a mapping table;
Step 2: the first frame image and the background image are both divided into multiple block regions under the same block coordinate system. The size of the background is W*H, the size of each divided block is w*h, and the total number of divided blocks is T=(W/w)*(H/h). All pixel values in the i-th block are denoted B_i; the total number of pixels in a block is N=w*h, so B_i stores w*h pixel values. A two-dimensional rectangular coordinate system Y is established with the lower-left corner of the i-th block as the origin, and the pixel value at point (m, n) of the i-th block is denoted B_i(m, n), wherein:
W is the number of pixels of the background in the horizontal direction;
H is the number of pixels of the background in the vertical direction;
w is the number of pixels across the width of the i-th block;
h is the number of pixels across the height of the i-th block;
i = 1, 2, 3...T;
m is the abscissa, under coordinate system Y, of any pixel in the i-th block, m = 0, 1, 2...w-1;
n is the ordinate, under coordinate system Y, of any pixel in the i-th block, n = 0, 1, 2...h-1;
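As an illustration of the block division of step 2, the following is a minimal sketch in Python/NumPy; the function name and the use of NumPy are assumptions for illustration, not part of the patent.

```python
import numpy as np

def split_into_blocks(image, w=8, h=6):
    """Split a W*H grayscale frame into T = (W/w)*(H/h) blocks of w*h pixels.

    Returns an array of shape (H/h, W/w, h, w): blocks[r, c] holds the
    N = w*h pixel values of one block (the B_i of step 2).
    """
    H, W = image.shape  # NumPy stores rows (height) first
    assert W % w == 0 and H % h == 0, "frame size must be a multiple of the block size"
    return image.reshape(H // h, h, W // w, w).swapaxes(1, 2)
```

With the embodiment's 720 × 288 frames and 8 × 6 blocks described later, this yields a 48 × 90 grid of blocks.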
Step 3: for each block in the first frame image, find the background block at the same position in the background image, and compute the sum of the absolute values of the gray-level differences between this block and its corresponding background block at each identical pixel position.
If the resulting sum is greater than a set threshold A, the block is a target block, and the gray value of all pixels in the target block is set to 255;
if the resulting sum is less than or equal to the set threshold A, the block is a background block, and the gray value of all pixels in the background block is set to 0, wherein:
the value range of threshold A is (10 ~ 20) × the area of a block;
finally the background and the targets of the first frame image are separated, giving the binary image of the first frame image;
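A minimal sketch of the block-wise binarization of step 3, operating on the block arrays produced above; the factor k and the function name are illustrative assumptions.

```python
import numpy as np

def binarize_blocks(frame_blocks, background_blocks, w=8, h=6, k=12):
    """Block-wise background subtraction of step 3.

    For each block, the sum of absolute gray-level differences against the
    background block at the same position is compared with threshold
    A = k * (w*h), k in [10, 20]; target blocks become 255, background blocks 0.
    """
    A = k * w * h
    diff = np.abs(frame_blocks.astype(np.int32)
                  - background_blocks.astype(np.int32)).sum(axis=(2, 3))
    target_mask = diff > A                       # True where the block is a target block
    binary = np.zeros_like(frame_blocks, dtype=np.uint8)
    binary[target_mask] = 255                    # all pixels of a target block set to 255
    return binary, target_mask
```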
Step 4: scan the obtained binary image block by block, from left to right and from top to bottom, and mark adjacent target blocks as the same target, at the same time computing the height and width of each marked target. When the value of height/width lies within the range of threshold B, perform edge detection on the binary image and find the optimal corner points, that is, a point is judged to be an optimal corner point when its horizontal detection value and its vertical detection value are both greater than threshold T, wherein:
the range of threshold B is 2 ~ 10;
the value of threshold T is 180;
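The block-level grouping of step 4 amounts to connected-component labeling on the grid of target blocks. The following sketch uses scipy.ndimage.label as a stand-in for the left-to-right, top-to-bottom scan described in the text; the function name and the use of SciPy are assumptions.

```python
import numpy as np
from scipy import ndimage

def label_targets(target_mask, b_low=2.0, b_high=10.0):
    """Group adjacent target blocks into targets (step 4) and keep those whose
    height/width ratio, measured in blocks, lies within the range of threshold B."""
    labels, count = ndimage.label(target_mask)   # 4-connected labeling of target blocks
    kept = []
    for lab in range(1, count + 1):
        rows, cols = np.nonzero(labels == lab)
        height = rows.max() - rows.min() + 1
        width = cols.max() - cols.min() + 1
        if b_low <= height / width <= b_high:
            kept.append(lab)
    return labels, kept
```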
Step 5: among these corner points, select the point whose sum of horizontal and vertical detection values is the smallest as the target's feature point. At the same time create a target structure that records the target's feature point position and match-tracking counter; the match-tracking counter R is initialized to zero the first time, and the feature point coordinate is (x1, y1). The image of one block size centered on this feature point is recorded as the template, and the pixel values of all pixels in this template are stored in a two-dimensional array B_t[N]; the pixel value of pixel (m, n) in this block is B_t(m, n);
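A sketch of the corner screening and the target structure of steps 4 and 5, assuming grad_x and grad_y are the horizontal and vertical edge-detection responses; the dataclass layout and all names are illustrative assumptions.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class TargetTrack:
    """Target structure of step 5: feature point position, template, match-tracking counter."""
    feature_point: tuple        # (x, y) of the current feature point
    template: np.ndarray        # w*h block of pixel values centered on the feature point (B_t)
    match_counter: int = 0      # R, incremented on every successful match

def select_feature_point(grad_x, grad_y, T=180):
    """Among points where both responses exceed T (optimal corners), pick the one
    with the smallest grad_x + grad_y sum as the feature point (step 5)."""
    candidates = (grad_x > T) & (grad_y > T)
    if not candidates.any():
        return None
    score = np.where(candidates, grad_x.astype(float) + grad_y, np.inf)
    y, x = np.unravel_index(np.argmin(score), score.shape)
    return int(x), int(y)
```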
Step 6: in the second frame image, take the image block B_t[N] recorded in the first frame as the template. Centered on the template's feature point position (x1, y1), choose a square region of 4 block sizes in the current frame image as the search region, and search block by block from left to right and from top to bottom. The number of blocks to be searched is denoted N, and B_j[N] denotes any search block in the current search region; the sum of absolute differences (SAD) between the two is computed,
Wherein:
SAD = \sum_{m=0}^{w-1} \sum_{n=0}^{h-1} |B_j(m, n) - B_t(m, n)|
j = 1, 2, 3...N
Taking the minimum SAD as the matching criterion, the block with the smallest of the N SAD values is chosen as the matching block, denoted B_s[N]; the matching feature point has thus been found in the current frame. The position (x2, y2) of the new matching feature point is recorded, the block image B_s[N] centered on the new feature point is taken as the new template B_t[N], and the match-tracking counter R is incremented by 1;
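A sketch of the SAD search of step 6. The exact extent of the "4 block sizes" search region is an assumption (taken here as one block width/height of slack on each side of the previous template position), as are the function names.

```python
import numpy as np

def sad(block, template):
    """Sum of absolute differences between a candidate block B_j and the template B_t."""
    return int(np.abs(block.astype(np.int32) - template.astype(np.int32)).sum())

def match_template(frame, template, center, w=8, h=6):
    """Slide a w*h window one pixel at a time over the search region centered on the
    previous feature point and return the center of the block with the smallest SAD."""
    cx, cy = center
    H, W = frame.shape
    tx, ty = cx - w // 2, cy - h // 2            # top-left of the previous template position
    best_sad, best_center = None, None
    for y in range(max(0, ty - h), min(H - h, ty + h) + 1):
        for x in range(max(0, tx - w), min(W - w, tx + w) + 1):
            s = sad(frame[y:y + h, x:x + w], template)
            if best_sad is None or s < best_sad:
                best_sad, best_center = s, (x + w // 2, y + h // 2)
    return best_center, best_sad
```

After a match, the winning block becomes the new template and the counter R of the target structure is incremented, as the step describes.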
Step 7: from the third frame image to the M-th frame image, M being a positive integer greater than 60, repeat step 6; when R equals threshold C, execute step 8;
Step 8: when the match-tracking counter R equals threshold C, all the feature points (x1, y1) ... (x60, y60) form the tracking trajectory of the target. By looking up the mapping table established in step 1, the actual distance corresponding to the tracking trajectory is obtained; the interval between adjacent points is the time of one frame, so an array of times and actual distances is obtained, and the speed of the tracking trajectory can be found by least-squares fitting. When this speed lies within the range of threshold D, the target is determined to be a pedestrian, wherein:
the value of threshold C is 60;
the range of threshold D is (0.3 ~ 2.0) m/s.
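A sketch of the least-squares speed estimate of step 8, assuming the per-frame distances have already been obtained from the mapping table; np.polyfit is used here as the least-squares line fit, and the function names are assumptions.

```python
import numpy as np

def trajectory_speed(distances_m, fps=25.0):
    """Fit distance vs. time with a degree-1 least-squares polynomial (step 8);
    the slope of the fitted line is the trajectory speed in m/s."""
    t = np.arange(len(distances_m)) / fps        # adjacent points are one frame time apart
    slope, _intercept = np.polyfit(t, distances_m, 1)
    return slope

def is_pedestrian(speed_mps, d_low=0.3, d_high=2.0):
    """Threshold D of step 8: speeds in (0.3 ~ 2.0) m/s are classified as pedestrians."""
    return d_low <= speed_mps <= d_high
```

For the embodiment described later, this kind of fit yields 1.46 m/s, which lies inside the range of threshold D.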
Compared with the prior art, the video-based road pedestrian event detection method of the present invention can detect all pedestrian targets within the video range, is not limited by the environment, can process real-time video, has a short detection time, is easy to implement, and has relatively high accuracy; it is well suited to real-time detection of road pedestrian events and has broad application prospects.
Brief description of the drawings
Fig. 1 is the first frame of the video image.
Fig. 2 is a schematic diagram of connected-component labeling; in the figure, a is the first connected domain and b is the second connected domain.
Fig. 3 is the binarization and labeling result of the first frame image; the white regions in the figure are the binarized marked targets of the current frame.
Fig. 4 is the second frame of the video image; the white cross in the figure is the target feature point that was found.
Fig. 5 is a schematic diagram of the sliding scan; the solid rectangle is the block to be searched, the dashed box represents the search region, its center point A is the position of the feature point in the previous frame, and the solid rectangle slides with a step of one pixel, from left to right and from top to bottom.
Fig. 6 is the binarized labeling image of the 60th frame; the white line in the figure is the tracking trajectory.
Fig. 7 is the actual motion trajectory curve corresponding to the tracking trajectory in Fig. 6; the abscissa is time, with a unit interval of 0.04 s, and the ordinate is the actual distance, in cm.
Fig. 8 is the pedestrian event detection result.
The content of the present invention is explained in further detail below in conjunction with the drawings and embodiments.
Detailed description of the embodiments
This embodiment provides a road pedestrian event detection method based on trajectory analysis, which determines whether a pedestrian event has occurred by block-based binarization segmentation, block-based connected-component labeling, feature-point selection, matched tracking of the target trajectory, and least-squares fitting of the pedestrian target's speed. It should be noted that the images processed in the procedure of the present invention are the first frame image, the second frame image, the third frame image, ..., the M-th frame image (M a positive integer) of the video in forward time-series order.
Let the size of each video frame be W*H and the size of each block be w*h, where W is the number of pixels of each video frame in the horizontal direction, H is the number of pixels of each video frame in the vertical direction, w is the width of each block region, and h is the height of each block region.
It should be noted that the mapping table in this embodiment is obtained using the camera geometric calibration method described in the invention patent "A camera geometric calibration method under a linear model" (publication number: CN102222332A).
The method of this embodiment is specifically realized by the following steps:
Step 1: establish the mapping relation from image pixels to the actual road-surface distance, i.e. a mapping table;
Step 2: the first frame image and the background image are both divided into multiple block regions under the same block coordinate system. The size of the background is W*H, the size of each divided block is w*h, and the total number of divided blocks is T=(W/w)*(H/h). All pixel values in the i-th block are denoted B_i; the total number of pixels in a block is N=w*h, so B_i stores w*h pixel values. A two-dimensional rectangular coordinate system Y is established with the lower-left corner of the i-th block as the origin, and the pixel value at point (m, n) of the i-th block is denoted B_i(m, n), wherein:
W is the number of pixels of the background in the horizontal direction;
H is the number of pixels of the background in the vertical direction;
w is the number of pixels across the width of the i-th block;
h is the number of pixels across the height of the i-th block;
i = 1, 2, 3...T;
m is the abscissa, under coordinate system Y, of any pixel in the i-th block, m = 0, 1, 2...w-1;
n is the ordinate, under coordinate system Y, of any pixel in the i-th block, n = 0, 1, 2...h-1;
Step 3: for each block in the first frame image, find the background block at the same position in the background image, and compute the sum of the absolute values of the gray-level differences between this block and its corresponding background block at each identical pixel position.
If the resulting sum is greater than a set threshold A, the block is a target block, and the gray value of all pixels in the target block is set to 255;
if the resulting sum is less than or equal to the set threshold A, the block is a background block, and the gray value of all pixels in the background block is set to 0, wherein:
the value range of threshold A is (10 ~ 20) × the area of a block;
finally the background and the targets of the first frame image are separated, giving the binary image of the first frame image;
Step 4: scan the obtained binary image block by block, from left to right and from top to bottom, and mark adjacent target blocks as the same target, at the same time computing the height and width of each marked target. When the value of height/width lies within the range of threshold B, perform edge detection on the binary image and find the optimal corner points, that is, when a point's horizontal detection value and vertical detection value are both greater than threshold T, whose value is 180, the point is judged to be an optimal corner point, wherein:
the range of threshold B is 2 ~ 10;
Step 5: among these corner points, select the point whose sum of horizontal and vertical detection values is the smallest as the target's feature point. At the same time create a target structure that records the target's feature point position and match-tracking counter; the match-tracking counter R is initialized to zero the first time, and the feature point coordinate is (x1, y1). The image of one block size centered on this feature point is recorded as the template, and the pixel values of all pixels in this template are stored in a two-dimensional array B_t[N]; the pixel value of pixel (m, n) in this block is B_t(m, n);
Step 6: in the second frame image, take the image block B_t[N] recorded in the first frame as the template. Centered on the template's feature point position (x1, y1), choose a square region of 4 block sizes in the current frame image as the search region, and search block by block from left to right and from top to bottom. The number of blocks to be searched is denoted N, and B_j[N] denotes any search block in the current search region; the sum of absolute differences (SAD) between the two is computed,
wherein:
SAD = \sum_{m=0}^{w-1} \sum_{n=0}^{h-1} |B_j(m, n) - B_t(m, n)|
j = 1, 2, 3...N
Taking the minimum SAD as the matching criterion, the block with the smallest of the N SAD values is chosen as the matching block, denoted B_s[N]; the matching feature point has thus been found in the current frame. The position (x2, y2) of the new matching feature point is recorded, the block image B_s[N] centered on the new feature point is taken as the new template B_t[N], and the match-tracking counter R is incremented by 1;
Step 7: from the third frame image to the M-th frame image, M being a positive integer greater than or equal to 60, repeat step 6; when R equals threshold C, execute step 8;
Step 8: when the match-tracking counter R equals threshold C, all the feature points (x1, y1) ... (x60, y60) form the tracking trajectory of the target. By looking up the mapping table established in step 1, the actual distance corresponding to the tracking trajectory is obtained; the interval between adjacent points is the time of one frame, so an array of times and actual distances is obtained, and the speed of the tracking trajectory can be found by least-squares fitting. When this speed lies within the range of threshold D, the target is determined to be a pedestrian, wherein:
the value of threshold C is 60;
the range of threshold D is (0.3 ~ 2.0) m/s;
Specific embodiments of the invention are given below. It should be noted that the invention is not limited to the following specific embodiments; all equivalent transformations made on the basis of the technical scheme of the invention fall within its protection scope.
Embodiment:
In the embodiment, the sampling frequency of the video is 25 frames per second, the size of each frame image is 720 × 288, the size of each block region is 8 × 6, and each frame image is divided into 90 × 48 block regions; the binarization segmentation threshold A of the target region is 576, the range of threshold B is 2 ~ 10, the value of threshold T is 180, the value of threshold C is 60, and the range of threshold D is (0.3 ~ 2.0) m/s. As shown in Figures 1 to 8, the first frame through the 60th frame are processed in turn according to the above method.
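These parameters are mutually consistent; the following quick check (an illustrative sketch, not part of the patent) reproduces the block count, threshold A, and the 0.04 s unit interval of Fig. 7.

```python
W, H, w, h = 720, 288, 8, 6
blocks_per_frame = (W // w) * (H // h)   # 90 * 48 = 4320 block regions per frame
A = 12 * (w * h)                         # 12 x block area = 576, with 12 inside the stated 10~20 range
frame_time = 1 / 25                      # 25 fps -> 0.04 s between adjacent trajectory points
print(blocks_per_frame, A, frame_time)   # 4320 576 0.04
```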
As can be seen from Fig. 6, the white line in the figure is the movement trajectory of the pedestrian from the first frame to the 60th frame. The lower end of this trajectory, where the pedestrian enters the scene, is the feature point position found for the pedestrian the first time, and the topmost point is the feature point matched in the 60th frame.
Fig. 7 is the actual distance curve corresponding to the tracking trajectory of Fig. 6. Fitting this section of the trajectory by the least-squares method gives an actual pedestrian movement speed of 1.46 m/s. As shown in Fig. 8, this movement speed lies within the range of threshold D, so the detection result is a pedestrian event on this road.

Claims (1)

1. A road pedestrian event detection method based on trajectory analysis, characterized in that the method is carried out according to the following steps:
Step 1: establish the mapping relation from image pixels to the actual road-surface distance, i.e. a mapping table;
Step 2: the first frame image and the background image are both divided into multiple block regions under the same block coordinate system. The size of the background is W*H, the size of each divided block is w*h, and the total number of divided blocks is T=(W/w)*(H/h). All pixel values in the i-th block are denoted B_i; the total number of pixels in a block is N=w*h, so B_i stores w*h pixel values. A two-dimensional rectangular coordinate system Y is established with the lower-left corner of the i-th block as the origin, and the pixel value at point (m, n) of the i-th block is denoted B_i(m, n), wherein:
W is the number of pixels of the background in the horizontal direction;
H is the number of pixels of the background in the vertical direction;
w is the number of pixels across the width of the i-th block;
h is the number of pixels across the height of the i-th block;
i = 1, 2, 3...T;
m is the abscissa, under coordinate system Y, of any pixel in the i-th block, m = 0, 1, 2...w-1;
n is the ordinate, under coordinate system Y, of any pixel in the i-th block, n = 0, 1, 2...h-1;
Step 3: for each block in the first frame image, find the background block at the same position in the background image, and compute the sum of the absolute values of the gray-level differences between this block and its corresponding background block at each identical pixel position.
If the resulting sum is greater than a set threshold A, the block is a target block, and the gray value of all pixels in the target block is set to 255;
if the resulting sum is less than or equal to the set threshold A, the block is a background block, and the gray value of all pixels in the background block is set to 0, wherein:
the value range of threshold A is (10 ~ 20) × the area of a block;
finally the background and the targets of the first frame image are separated, giving the binary image of the first frame image;
Step 4: scan the obtained binary image block by block, from left to right and from top to bottom, and mark adjacent target blocks as the same target, at the same time computing the height and width of each marked target. When the value of height/width lies within the range of threshold B, perform edge detection on the binary image and find the optimal corner points, that is, a point is judged to be an optimal corner point when its horizontal detection value and its vertical detection value are both greater than threshold T, wherein:
the range of threshold B is 2 ~ 10;
the value of threshold T is 180;
Step 5: among these corner points, select the point whose sum of horizontal and vertical detection values is the smallest as the target's feature point. At the same time create a target structure that records the target's feature point position and match-tracking counter; the match-tracking counter R is initialized to zero the first time, and the feature point coordinate is (x1, y1). The image of one block size centered on this feature point is recorded as the template, and the pixel values of all pixels in this template are stored in a two-dimensional array B_t[N]; the pixel value of pixel (m, n) in this block is B_t(m, n);
Step 6: in the second frame image, take the image block B_t[N] recorded in the first frame as the template. Centered on the template's feature point position (x1, y1), choose a square region of 4 block sizes in the current frame image as the search region, and search block by block from left to right and from top to bottom. The number of blocks to be searched is denoted N, and B_j[N] denotes any search block in the current search region; the sum of absolute differences (SAD) between the two is computed,
wherein:
SAD = \sum_{m=0}^{w-1} \sum_{n=0}^{h-1} |B_j(m, n) - B_t(m, n)|
j = 1, 2, 3...N
Taking the minimum SAD as the matching criterion, the block with the smallest of the N SAD values is chosen as the matching block, denoted B_s[N]; the matching feature point has thus been found in the current frame. The position (x2, y2) of the new matching feature point is recorded, the block image B_s[N] centered on the new feature point is taken as the new template B_t[N], and the match-tracking counter R is incremented by 1;
Step 7: from the third frame image to the M-th frame image, M being a positive integer greater than 60, repeat step 6; when R equals threshold C, execute step 8;
Step 8: when the match-tracking counter R equals threshold C, all the feature points (x1, y1) ... (x60, y60) form the tracking trajectory of the target. By looking up the mapping table established in step 1, the actual distance corresponding to the tracking trajectory is obtained; the interval between adjacent points is the time of one frame, so an array of times and actual distances is obtained, and the speed of the tracking trajectory can be found by least-squares fitting. When this speed lies within the range of threshold D, the target is determined to be a pedestrian, wherein:
the value of threshold C is 60;
the range of threshold D is (0.3 ~ 2.0) m/s.
CN201310045531.8A 2013-02-05 2013-02-05 Road pedestrian event detection method based on trajectory analysis Active CN103150550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310045531.8A CN103150550B (en) 2013-02-05 2013-02-05 Road pedestrian event detection method based on trajectory analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310045531.8A CN103150550B (en) 2013-02-05 2013-02-05 Road pedestrian event detection method based on trajectory analysis

Publications (2)

Publication Number Publication Date
CN103150550A CN103150550A (en) 2013-06-12
CN103150550B true CN103150550B (en) 2015-10-28

Family

ID=48548613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310045531.8A Active CN103150550B (en) 2013-02-05 2013-02-05 Road pedestrian event detection method based on trajectory analysis

Country Status (1)

Country Link
CN (1) CN103150550B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105898107B (en) * 2016-04-21 2019-01-25 北京格灵深瞳信息技术有限公司 A kind of target object grasp shoot method and system
SG11202010040YA (en) * 2018-04-10 2020-11-27 Mgi Tech Co Ltd Fluorescence image registration method, gene sequencing instrument and system, and storage medium
CN111563489A (en) * 2020-07-14 2020-08-21 浙江大华技术股份有限公司 Target tracking method and device and computer storage medium
CN112614155B (en) * 2020-12-16 2022-07-26 深圳市图敏智能视频股份有限公司 Passenger flow tracking method
CN113408333B (en) * 2021-04-27 2022-10-11 上海工程技术大学 Method for distinguishing pedestrian traffic behaviors in subway station based on video data
CN113158953B (en) * 2021-04-30 2022-11-25 青岛海信智慧生活科技股份有限公司 Personnel searching method, device, equipment and medium
CN118247804A (en) * 2022-12-16 2024-06-25 中兴通讯股份有限公司 Security alarm method, terminal and computer readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622582A (en) * 2012-02-21 2012-08-01 长安大学 Road pedestrian event detection method based on video
CN102509306B (en) * 2011-10-08 2014-02-19 西安理工大学 Specific target tracking method based on video

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509306B (en) * 2011-10-08 2014-02-19 西安理工大学 Specific target tracking method based on video
CN102622582A (en) * 2012-02-21 2012-08-01 长安大学 Road pedestrian event detection method based on video

Also Published As

Publication number Publication date
CN103150550A (en) 2013-06-12

Similar Documents

Publication Publication Date Title
CN103150550B (en) Road pedestrian event detection method based on trajectory analysis
CN103324913B (en) A kind of pedestrian event detection method of Shape-based interpolation characteristic sum trajectory analysis
CN102622886B (en) Video-based method for detecting violation lane-changing incident of vehicle
CN106935035B (en) Parking offense vehicle real-time detection method based on SSD neural network
Huang Traffic speed estimation from surveillance video data
CN104200657B (en) A kind of traffic flow parameter acquisition method based on video and sensor
US12002225B2 (en) System and method for transforming video data into directional object count
CN103226834B (en) A kind of image motion target signature point method for fast searching
CN103425764B (en) Vehicle matching method based on videos
Pan et al. Traffic surveillance system for vehicle flow detection
CN102609720B (en) Pedestrian detection method based on position correction model
CN109697420A (en) A kind of Moving target detection and tracking towards urban transportation
CN105551264A (en) Speed detection method based on license plate characteristic matching
CN104282020A (en) Vehicle speed detection method based on target motion track
CN105825185A (en) Early warning method and device against collision of vehicles
CN103150908B (en) Average vehicle speed detecting method based on video
CN101094413A (en) Real time movement detection method in use for video monitoring
CN113505638B (en) Method and device for monitoring traffic flow and computer readable storage medium
CN103208190A (en) Traffic flow detection method based on object detection
Wu et al. Adjacent lane detection and lateral vehicle distance measurement using vision-based neuro-fuzzy approaches
CN113887304A (en) Road occupation operation monitoring method based on target detection and pedestrian tracking
CN111539436A (en) Rail fastener positioning method based on straight template matching
CN102622582B (en) Road pedestrian event detection method based on video
Wang et al. An inverse projective mapping-based approach for robust rail track extraction
CN104537690A (en) Moving point target detection method based on maximum value-time index combination

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211228

Address after: 710000 room d4405, 4th floor, free trade bonded building, No. 1, free trade Avenue, airport new town, Xixian new area, Xi'an, Shaanxi a44

Patentee after: Xi'an Dewei Shitong Intelligent Transportation Co.,Ltd.

Address before: 710064 middle section of south 2nd Ring Road, Xi'an, Shaanxi

Patentee before: CHANG'AN University

TR01 Transfer of patent right
CP01 Change in the name or title of a patent holder

Address after: 710000 room d4405, 4th floor, free trade bonded building, No. 1, free trade Avenue, airport new town, Xixian new area, Xi'an, Shaanxi a44

Patentee after: Xi'an Dewei Shitong Intelligent Technology Co.,Ltd.

Address before: 710000 room d4405, 4th floor, free trade bonded building, No. 1, free trade Avenue, airport new town, Xixian new area, Xi'an, Shaanxi a44

Patentee before: Xi'an Dewei Shitong Intelligent Transportation Co.,Ltd.

CP01 Change in the name or title of a patent holder