CN101600095B - Video monitoring method and video monitoring system - Google Patents


Info

Publication number
CN101600095B
CN101600095B (application CN200910040774A)
Authority
CN
China
Prior art keywords
video data
data
video
camera
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200910040774
Other languages
Chinese (zh)
Other versions
CN101600095A (en)
Inventor
谢佳亮
张丛喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO LTD
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN 200910040774
Publication of CN101600095A
Application granted
Publication of CN101600095B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a video monitoring method and a video monitoring system. The video monitoring method comprises the steps of: collecting first video data with a first camera and second video data with a second camera; when the first video data and the second video data overlap, extracting first overlap data and second overlap data; converting the first overlap data into a first grayscale map and the second overlap data into a second grayscale map; using a block matching function, by pixel-by-pixel comparison, to calculate the position information at which pixels of the first grayscale map within a first predetermined area match pixels of the second grayscale map within a second predetermined area; and splicing the first video data and the second video data according to the position information. The invention makes the video data output by the first camera and the second camera comparatively complete and clear, with no overlapping video data.

Description

Video monitoring method and video monitoring system
Technical field
The present invention relates to the field of computer technology, and in particular to a video monitoring method and a video monitoring system.
Background art
In the prior art, a region is monitored by installing a single camera for video monitoring. To monitor a large area, multiple cameras must be installed, and the video data output by the multiple cameras may contain overlapping parts; the overlapping parts appear blurred, which degrades the completeness and definition of the output video data and makes it inconvenient to watch.
Summary of the invention
The present invention provides a video monitoring method and a video monitoring system that improve the completeness and definition of the video data output by multiple cameras.
The technical scheme of the present invention is a video monitoring method comprising the steps of:
Step 1: collecting first video data with a first camera and second video data with a second camera.
After step 1, the method further comprises a step of performing video correction on the first video data or the second video data:
selecting the data of the first video data or the second video data at a predetermined moment as a reference image, and obtaining corner points of the reference image;
obtaining, by region matching, the match points, relative to the corner points, of the data of the first video data or the second video data after the predetermined moment;
calculating, according to an affine model, the motion offset of the match points with respect to the corner points, and performing motion compensation on the first video data or the second video data according to the motion offset;
detecting the light or the offset of the data of the first video data or the second video data after the predetermined moment relative to the reference image and, when the light or the offset reaches a preset light or offset value, updating the reference image with the data after the predetermined moment.
Step 2: detecting, from the installation angles of the first camera and the second camera, whether the video data output by the two cameras overlap; after determining the overlap angle of the first video data and the second video data, extracting, according to the overlap angle, the first overlap data of the first video data in the overlap region and the second overlap data of the second video data in the overlap region.
Step 3: converting the first overlap data into a first grayscale map and the second overlap data into a second grayscale map.
Step 4: for the pixels of the first grayscale map within a first predetermined area and the pixels of the second grayscale map within a second predetermined area, using a block matching function, by pixel-by-pixel comparison, to calculate the position information at which the pixels of the first predetermined area match the pixels of the second predetermined area.
Step 5: calculating, from the position information, the vertical and horizontal offsets of the first grayscale map relative to the second grayscale map, and splicing the first video data and the second video data according to the vertical and horizontal offsets.
The present invention also discloses a video monitoring system, comprising:
a first camera for collecting first video data;
a second camera for collecting second video data;
a video correction module for selecting the data of the first video data or the second video data at a predetermined moment as a reference image, obtaining corner points of the reference image, obtaining by region matching the match points, relative to the corner points, of the data of the first video data or the second video data after the predetermined moment, calculating according to an affine model the motion offset of the match points with respect to the corner points, and performing motion compensation on the first video data or the second video data according to the motion offset;
a detection module for detecting the light or the offset of the data of the first video data or the second video data after the predetermined moment relative to the reference image and, when the light or the offset reaches a preset light or offset value, updating the reference image with the data after the predetermined moment;
an extraction module for detecting, from the installation angles of the first camera and the second camera, whether the video data output by the two cameras overlap, determining the overlap angle of the first video data and the second video data, and extracting, according to the overlap angle, the first overlap data of the first video data in the overlap region and the second overlap data of the second video data in the overlap region;
a conversion module for converting the first overlap data into a first grayscale map and the second overlap data into a second grayscale map;
a calculation module for, over the pixels of the first grayscale map within a first predetermined area and the pixels of the second grayscale map within a second predetermined area, using a block matching function, by pixel-by-pixel comparison, to calculate the position information at which the pixels of the first predetermined area match the pixels of the second predetermined area;
a splicing module for calculating, from the position information, the vertical and horizontal offsets of the first grayscale map relative to the second grayscale map, and splicing the first video data and the second video data according to the offsets.
With the video monitoring method and video monitoring system of the present invention, when the first video data and the second video data overlap, the method calculates the vertical and horizontal offsets of the first video data relative to the second video data and splices the two according to those offsets. The video data output by the first camera and the second camera are thus comparatively complete and clear, with no overlapping video data, which is convenient for the user to view. In addition, because multiple cameras cover a wide range, large-scale video monitoring can be realized.
Brief description of the drawings
Fig. 1 is a flow chart of the video monitoring method of the present invention in one embodiment;
Fig. 2 is a flow chart of the video monitoring method of the present invention in another embodiment;
Fig. 3 is a structural block diagram of the video monitoring system of the present invention in one embodiment;
Fig. 4 is a structural block diagram of the video monitoring system of the present invention in another embodiment.
Embodiment
With the video monitoring method and video monitoring system of the present invention, when the first video data and the second video data overlap, the method calculates the vertical and horizontal offsets of the first video data relative to the second video data and splices the two according to those offsets. The video data output by the first camera and the second camera are thus comparatively complete and clear, with no overlapping video data, which is convenient for the user to view. In addition, because multiple cameras cover a wide range, large-scale video monitoring can be realized.
Specific embodiments of the present invention are elaborated below in conjunction with the accompanying drawings.
The video monitoring method of the present invention, as shown in Fig. 1, comprises the steps of:
S101: collecting first video data with a first camera and second video data with a second camera. The installation angles of the first camera and the second camera can be set as required; installing two cameras enlarges the scope of video monitoring.
S102: when the first video data and the second video data overlap, extracting the first overlap data of the first video data in the overlap region and the second overlap data of the second video data in the overlap region. In a preferred embodiment, before this step the method may further comprise: detecting, from the installation angles of the first camera and the second camera, whether the video data output by the two cameras overlap; other methods of detecting the overlap may also be used. After the overlap angle of the first video data and the second video data has been determined, the first overlap data and the second overlap data can be extracted from the overlap region according to the overlap angle.
S103: converting the first overlap data into a first grayscale map and the second overlap data into a second grayscale map.
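As an illustration of step S103, a grayscale conversion might look as follows. The patent does not specify a grayscale formula, so this sketch assumes the common ITU-R BT.601 luma weights, and the pixel layout (2-D lists of RGB tuples) is likewise hypothetical:

```python
def to_grayscale(region):
    """Convert a 2-D list of (R, G, B) pixels to a 2-D grayscale map
    using the BT.601 luma weights (an assumed choice)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in region]

# Converting both overlap regions yields the first and second grayscale maps:
first_overlap = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (255, 255, 255)]]
first_gray = to_grayscale(first_overlap)  # [[76, 150], [29, 255]]
```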
S104: for the pixels of the first grayscale map within a first predetermined area and the pixels of the second grayscale map within a second predetermined area, using a block matching function, by pixel-by-pixel comparison, to calculate the position information at which the pixels of the first predetermined area match the pixels of the second predetermined area.
The first predetermined area may be part or all of the first grayscale map, and the second predetermined area part or all of the second grayscale map; the matching position information is most accurate when each predetermined area covers the whole of its grayscale map. In a preferred embodiment the block matching function is the mean of the absolute values of pixel differences, though other block matching functions may be adopted. The pixel-by-pixel comparison may proceed as follows: take each pixel of the first predetermined area and each pixel of the second predetermined area, calculate the mean absolute pixel difference between them, find the minimum of these means, and record the pixel positions in the first and second predetermined areas at which the minimum is attained.
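A minimal sketch of the block matching described above — the mean of absolute pixel differences as the function D, plus a pixel-by-pixel search for the minimizing position. The function names and the way the candidate window slides are illustrative assumptions, not taken from the patent:

```python
def mad(area_a, area_b):
    """Block matching function D: mean of absolute pixel differences
    between two equal-sized 2-D grayscale areas."""
    n = len(area_a) * len(area_a[0])
    return sum(abs(a - b) for row_a, row_b in zip(area_a, area_b)
                          for a, b in zip(row_a, row_b)) / n

def best_match(gray1, gray2, block):
    """Slide a block-sized patch of gray1 over gray2 pixel by pixel and
    return the top-left position where the mean absolute difference is
    minimal (the 'matching position information')."""
    h, w = len(gray2), len(gray2[0])
    patch = [row[:block] for row in gray1[:block]]
    candidates = {}
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            window = [row[x:x + block] for row in gray2[y:y + block]]
            candidates[(y, x)] = mad(patch, window)
    return min(candidates, key=candidates.get)
```

For example, if a 2×2 patch of the first map reappears one pixel down and right in the second map, `best_match` returns `(1, 1)`.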
S105: calculating, from the position information, the vertical and horizontal offsets of the first grayscale map relative to the second grayscale map, and splicing the first video data and the second video data according to those offsets. The position information yields the vertical and horizontal offsets directly, so the first and second video data can be spliced accordingly.
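For illustration, once the offsets are known, the splicing of step S105 can be sketched as below. This sketch handles only the horizontal offset (a vertical offset would shift the rows of the second frame analogously before joining), and the representation of frames as 2-D lists is an assumption:

```python
def splice(first, second, dx):
    """Join two equal-height grayscale frames side by side, dropping the
    first dx columns of `second`, which duplicate the overlap region."""
    return [row1 + row2[dx:] for row1, row2 in zip(first, second)]

panorama = splice([[1, 2], [3, 4]], [[2, 9], [4, 8]], dx=1)
# panorama == [[1, 2, 9], [3, 4, 8]]
```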
It should be noted that in practical applications video data may also be collected by more than two cameras, designed according to actual needs, which gives the cameras a wider monitoring range; the embodiments above show only the first camera and the second camera.
To prevent the first video data or the second video data from jittering or becoming distorted, in a preferred embodiment the first video data or the second video data are data that have undergone video correction. As shown in Fig. 2, a video correction step S1011 may be added between steps S101 and S102 to perform video correction on the first video data or the second video data, as follows:
Step 1: selecting the data of the first video data or the second video data at a predetermined moment as a reference image, and obtaining the corner points of the reference image; the corner points are the feature points of the reference image.
Step 2: obtaining, by region matching, the match points, relative to the corner points, of the data of the first video data or the second video data after the predetermined moment.
Step 3: calculating, according to an affine model, the motion offset of the match points with respect to the corner points, and performing motion compensation on the first video data or the second video data according to the motion offset.
In a preferred embodiment, the affine model is a four-parameter affine model.
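To make the four-parameter affine model concrete: it maps a reference-frame corner point (x, y) to its match (u, v) as u = a·x − b·y + t_x, v = b·x + a·y + t_y (uniform scale, rotation and translation). The least-squares fit below is one possible pure-Python sketch; the normal-equation setup, solver, and all names are illustrative, not taken from the patent:

```python
def solve(M, r):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(M)
    A = [row[:] + [r[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda k: abs(A[k][c]))
        A[c], A[p] = A[p], A[c]
        for k in range(c + 1, n):
            f = A[k][c] / A[c][c]
            A[k] = [ak - f * ac for ak, ac in zip(A[k], A[c])]
    x = [0.0] * n
    for c in reversed(range(n)):
        x[c] = (A[c][n] - sum(A[c][k] * x[k] for k in range(c + 1, n))) / A[c][c]
    return x

def fit_affine4(pts, matches):
    """Least-squares fit of the four parameters (a, b, tx, ty) from
    corner-point -> match-point pairs, via the normal equations."""
    n = len(pts)
    Sx = sum(x for x, _ in pts); Sy = sum(y for _, y in pts)
    Su = sum(u for u, _ in matches); Sv = sum(v for _, v in matches)
    Sq = sum(x * x + y * y for x, y in pts)
    Sxu = sum(x * u + y * v for (x, y), (u, v) in zip(pts, matches))
    Syu = sum(x * v - y * u for (x, y), (u, v) in zip(pts, matches))
    M = [[Sq, 0, Sx, Sy],
         [0, Sq, -Sy, Sx],
         [Sx, -Sy, n, 0],
         [Sy, Sx, 0, n]]
    return solve(M, [Sxu, Syu, Su, Sv])
```

The fitted parameters encode the motion offset; motion compensation would then apply the inverse transform to the current frame.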
In a preferred embodiment, step S1011 may further comprise: detecting a distinguishing feature of the data of the first video data or the second video data after the predetermined moment relative to the reference image and, when the distinguishing feature satisfies a predetermined condition, updating the reference image with the data after the predetermined moment. The distinguishing feature may be a feature such as light or offset; the predetermined condition is a preset light or offset value, and the reference image needs to be updated when the light or the offset reaches that preset value. Constantly updating the reference image improves the effect of the video correction and prevents slight jitter in the video data.
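One way the reference-image update rule could look in code. Here the "distinguishing feature" is taken to be the mean absolute brightness change between the reference image and the current data, which is an assumption; the patent leaves the exact feature and threshold open:

```python
def should_update_reference(reference, frame, threshold):
    """Return True when the mean absolute brightness change of `frame`
    relative to `reference` reaches the preset threshold value."""
    n = len(reference) * len(reference[0])
    change = sum(abs(a - b) for row_r, row_f in zip(reference, frame)
                            for a, b in zip(row_r, row_f)) / n
    return change >= threshold

reference = [[10, 10], [10, 10]]
frame = [[30, 30], [30, 30]]   # mean brightness change of 20
if should_update_reference(reference, frame, threshold=15):
    reference = frame          # update the reference image
```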
The method of splicing the first video data and the second video data is elaborated below with a specific embodiment.
Since the first video data and the second video data need to be spliced both horizontally and vertically, a pixel-by-pixel comparison is adopted. Let the first video data be I and the second video data be J, with sizes w_i × h_i and w_j × h_j respectively, where h_i = h_j; w_i and w_j denote the widths, and h_i and h_j the heights, of the first and second video data.
1. Take the region of size min(w_i/3, w_j/3) × h_i on the right side of I and on the left side of J as the overlap region, and convert the data of the first video data and the second video data in the overlap region into grayscale maps I_r and J_l respectively.
2. Let w be the comparison width and h the comparison height:

for h = -Δh, ..., +Δh
    for w = 1, ..., min(w_i/3, w_j/3)
        if h < 0
            calculate D[I_r(1~w, -h~h_i), J_l(1~w, 1~h+h_j)]
        else
            calculate D[I_r(1~w, 1~h_i-h), J_l(1~w, h~h_j)]

Take the coordinate information of the pixels corresponding to the minimum value of D, calculate from it the vertical and horizontal offsets of the first video data relative to the second video data, and splice the first video data and the second video data according to those offsets.
Here D(I, J) is the block matching function, defined as the mean of the absolute values of pixel differences. I(x_1~x_2, y_1~y_2) denotes the subimage of image I with coordinate range x ∈ [x_1, x_2], y ∈ [y_1, y_2]; video data coordinates start from 1. Δh is the predetermined search range in the vertical direction, which in an implementation may be taken as h/20.
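The search loop above can be sketched in executable form as follows. Grayscale strips are represented as 2-D lists, coordinates start from 0 rather than 1, and the tie-breaking when several (h, w) pairs reach the same minimum is an implementation choice the patent leaves open:

```python
def D(A, B):
    """Block matching function: mean of absolute pixel differences."""
    n = sum(len(row) for row in A)
    return sum(abs(a - b) for ra, rb in zip(A, B)
                          for a, b in zip(ra, rb)) / n

def crop(img, rows, cols):
    """Subimage img(rows, cols), half-open 0-based ranges."""
    return [row[cols[0]:cols[1]] for row in img[rows[0]:rows[1]]]

def find_vertical_offset(Ir, Jl, dh):
    """Search h in [-dh, dh] and the comparison width w for the strip pair
    minimising D, mirroring the h < 0 / h >= 0 cases above."""
    hi, hj = len(Ir), len(Jl)
    wmax = min(len(Ir[0]), len(Jl[0]))
    best = None
    for h in range(-dh, dh + 1):
        for w in range(1, wmax + 1):
            if h < 0:
                d = D(crop(Ir, (-h, hi), (0, w)), crop(Jl, (0, hj + h), (0, w)))
            else:
                d = D(crop(Ir, (0, hi - h), (0, w)), crop(Jl, (h, hj), (0, w)))
            if best is None or d < best[0]:
                best = (d, h, w)
    return best[1]  # vertical offset of I relative to J
```

For instance, if the left strip of J repeats the rows of I's right strip shifted down by two rows, the function recovers a vertical offset of 2.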
The present invention also discloses a video monitoring system which, as shown in Fig. 3, comprises a first camera, a second camera, an extraction module, a conversion module, a calculation module and a splicing module. The outputs of the first camera and the second camera are each connected to the input of the extraction module, and the output of the extraction module is connected to the splicing module through the conversion module and the calculation module in turn.
The first camera collects first video data.
The second camera collects second video data. The installation angles of the first camera and the second camera can be set as required, and installing two cameras enlarges the scope of video monitoring.
The extraction module extracts, when the first video data and the second video data overlap, the first overlap data of the first video data in the overlap region and the second overlap data of the second video data in the overlap region. In a preferred embodiment the extraction module also detects, from the installation angles of the first camera and the second camera, whether the first video data and the second video data overlap; other detection methods may also be used. After the overlap angle has been detected, the first overlap data and the second overlap data can be extracted from the overlap region according to the overlap angle.
The conversion module converts the first overlap data into a first grayscale map and the second overlap data into a second grayscale map.
The calculation module applies, over the pixels of the first grayscale map within a first predetermined area and the pixels of the second grayscale map within a second predetermined area, a block matching function by pixel-by-pixel comparison to calculate the position information at which the pixels of the first predetermined area match the pixels of the second predetermined area. In a preferred embodiment the block matching function is the mean of the absolute values of pixel differences, though other block matching functions may be adopted: calculate the mean absolute difference between the pixels of the first predetermined area and the pixels of the second predetermined area, and take the pixel positions in the first and second predetermined areas corresponding to the minimum mean as the matching position information.
The splicing module calculates, from the position information, the vertical and horizontal offsets of the first grayscale map relative to the second grayscale map, and splices the first video data and the second video data according to those offsets.
In a preferred embodiment, as shown in Fig. 4, the video monitoring system of the present invention further comprises a video correction module, connected between the outputs of the first camera and the second camera and the input of the extraction module, which:
selects the data of the first video data or the second video data at a predetermined moment as a reference image and obtains the corner points of the reference image;
obtains, by region matching, the match points, relative to the corner points, of the data of the first video data or the second video data after the predetermined moment;
calculates, according to an affine model, the motion offset of the match points with respect to the corner points, and performs motion compensation on the first video data or the second video data according to the motion offset.
Video correction can thus be performed on the first video data or the second video data, preventing jitter or distortion.
In a preferred embodiment, the affine model is a four-parameter affine model.
In a preferred embodiment, the video monitoring system of the present invention further comprises a detection module, connected to the video correction module, which detects a distinguishing feature of the data of the first video data or the second video data after the predetermined moment relative to the reference image and, when the distinguishing feature satisfies a predetermined condition, updates the reference image with the data after the predetermined moment. The distinguishing feature may be a feature such as light or offset; the predetermined condition is a preset light or offset value, and the reference image needs to be updated when the light or the offset reaches that value. Constantly updating the reference image improves the effect of the video correction and prevents slight jitter in the video data.
The embodiments described above do not limit the scope of protection of the present invention. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the claims of the present invention.

Claims (5)

1. A video monitoring method, characterized by comprising the steps of:
step 1: collecting first video data with a first camera and second video data with a second camera;
after step 1, further comprising a step of performing video correction on the first video data or the second video data:
selecting the data of the first video data or the second video data at a predetermined moment as a reference image, and obtaining corner points of the reference image;
obtaining, by region matching, the match points, relative to the corner points, of the data of the first video data or the second video data after the predetermined moment;
calculating, according to an affine model, the motion offset of the match points with respect to the corner points, and performing motion compensation on the first video data or the second video data according to the motion offset;
detecting the light or the offset of the data of the first video data or the second video data after the predetermined moment relative to the reference image and, when the light or the offset reaches a preset light or offset value, updating the reference image with the data after the predetermined moment;
step 2: detecting, from the installation angles of the first camera and the second camera, whether the video data output by the two cameras overlap; after determining the overlap angle of the first video data and the second video data, extracting, according to the overlap angle, the first overlap data of the first video data in the overlap region and the second overlap data of the second video data in the overlap region;
step 3: converting the first overlap data into a first grayscale map and the second overlap data into a second grayscale map;
step 4: for the pixels of the first grayscale map within a first predetermined area and the pixels of the second grayscale map within a second predetermined area, using a block matching function, by pixel-by-pixel comparison, to calculate the position information at which the pixels of the first predetermined area match the pixels of the second predetermined area;
step 5: calculating, from the position information, the vertical and horizontal offsets of the first grayscale map relative to the second grayscale map, and splicing the first video data and the second video data according to the vertical and horizontal offsets.
2. The video monitoring method according to claim 1, characterized in that using the block matching function to calculate the position information at which the pixels of the first predetermined area match the pixels of the second predetermined area is specifically: calculating the mean of the absolute values of the differences between the pixels of the first predetermined area and the pixels of the second predetermined area, and taking the pixel positions in the first predetermined area and the second predetermined area corresponding to the minimum mean as the matching position information.
3. The video monitoring method according to claim 1, characterized in that the affine model is a four-parameter affine model.
4. A video monitoring system, characterized by comprising:
a first camera for collecting first video data;
a second camera for collecting second video data;
a video correction module for selecting the data of the first video data or the second video data at a predetermined moment as a reference image, obtaining corner points of the reference image, obtaining by region matching the match points, relative to the corner points, of the data of the first video data or the second video data after the predetermined moment, calculating according to an affine model the motion offset of the match points with respect to the corner points, and performing motion compensation on the first video data or the second video data according to the motion offset;
a detection module for detecting the light or the offset of the data of the first video data or the second video data after the predetermined moment relative to the reference image and, when the light or the offset reaches a preset light or offset value, updating the reference image with the data after the predetermined moment;
an extraction module for detecting, from the installation angles of the first camera and the second camera, whether the video data output by the two cameras overlap, determining the overlap angle of the first video data and the second video data, and extracting, according to the overlap angle, the first overlap data of the first video data in the overlap region and the second overlap data of the second video data in the overlap region;
a conversion module for converting the first overlap data into a first grayscale map and the second overlap data into a second grayscale map;
a calculation module for, over the pixels of the first grayscale map within a first predetermined area and the pixels of the second grayscale map within a second predetermined area, using a block matching function, by pixel-by-pixel comparison, to calculate the position information at which the pixels of the first predetermined area match the pixels of the second predetermined area;
a splicing module for calculating, from the position information, the vertical and horizontal offsets of the first grayscale map relative to the second grayscale map, and splicing the first video data and the second video data according to the offsets.
5. The video monitoring system according to claim 4, characterized in that the affine model is a four-parameter affine model.
CN 200910040774 2009-07-02 2009-07-02 Video monitoring method and video monitoring system Expired - Fee Related CN101600095B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910040774 CN101600095B (en) 2009-07-02 2009-07-02 Video monitoring method and video monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200910040774 CN101600095B (en) 2009-07-02 2009-07-02 Video monitoring method and video monitoring system

Publications (2)

Publication Number Publication Date
CN101600095A CN101600095A (en) 2009-12-09
CN101600095B true CN101600095B (en) 2012-12-19

Family

ID=41421304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910040774 Expired - Fee Related CN101600095B (en) 2009-07-02 2009-07-02 Video monitoring method and video monitoring system

Country Status (1)

Country Link
CN (1) CN101600095B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102801966B (en) * 2012-08-29 2015-10-28 上海天跃科技股份有限公司 A kind of camera covering area overlapping algorithm and supervisory control system
CN104754292B (en) * 2013-12-31 2017-12-19 浙江大华技术股份有限公司 Vision signal compensates determination method for parameter and device used in processing
CN116437127B (en) * 2023-06-13 2023-08-11 典基网络科技(上海)有限公司 Video cartoon optimizing method based on user data sharing

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1567376A (en) * 2003-07-03 2005-01-19 马堃 On-site panoramic imagery method of digital imaging device
CN101093348A (en) * 2006-06-22 2007-12-26 三星电子株式会社 Apparatus and method for panoramic photography in portable terminal

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1567376A (en) * 2003-07-03 2005-01-19 Ma Kun On-site panoramic imaging method for a digital imaging device
CN101093348A (en) * 2006-06-22 2007-12-26 Samsung Electronics Co., Ltd. Apparatus and method for panoramic photography in portable terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yin Desen et al. Digital image stabilization algorithm based on corner tracking. Infrared and Laser Engineering, 2008, Vol. 37, No. 5, pp. 920-923. *

Also Published As

Publication number Publication date
CN101600095A (en) 2009-12-09

Similar Documents

Publication Publication Date Title
US10733705B2 (en) Information processing device, learning processing method, learning device, and object recognition device
KR102058001B1 (en) Traffic lane correction system, traffic lane correction apparatus and correction method
CN113252053B (en) High-precision map generation method and device and electronic equipment
CN101600095B (en) Video monitoring method and video monitoring system
US20150262343A1 (en) Image processing device and image processing method
CN102393901A (en) Traffic flow information perception method based on hybrid characteristic and system thereof
US11880993B2 (en) Image processing device, driving assistance system, image processing method, and program
CN105590092A (en) Method and device for identifying pupil in image
JP6647171B2 (en) Information processing apparatus, information processing method, and program
CN101493739A (en) Splicing wall positioning system and splicing wall positioning method
CN105354813A (en) Method and device for driving pan-tilt-zoom (PTZ) to generate spliced image
CN103390259A (en) Ground image processing method in visual guidance AGV
CN104268884A (en) Lane departure early warning calibration system and method based on vehicle networking
CN113989516A (en) Smoke dynamic identification method and related device
CN111862208B (en) Vehicle positioning method, device and server based on screen optical communication
CN104794680A (en) Multi-camera image mosaicking method and multi-camera image mosaicking device based on same satellite platform
CN110794397B (en) Target detection method and system based on camera and radar
CN101526848B (en) Coordinate judging system and method
CN110658929B (en) Control method and device for intelligent pen
KR102642691B1 (en) apparatus for recognizing measurement value and correcting distortion of instrument panel image and camera
JP2010224926A (en) Stop line detection device
CN102710978B Cursor movement method and device for a television set
CN114445619A (en) Comprehensive pipe gallery risk identification method and system based on sound signal imaging
CN114429631A (en) Three-dimensional object detection method, device, equipment and storage medium
CN103837098A (en) Screen test device and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO.,LTD.

Free format text: FORMER OWNER: XIE JIALIANG

Effective date: 20140925

Free format text: FORMER OWNER: ZHANG CONGZHE

Effective date: 20140925

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 510620 GUANGZHOU, GUANGDONG PROVINCE TO: 510670 GUANGZHOU, GUANGDONG PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20140925

Address after: Floor 18, Building A5, Headquarters Economic Zone, 243 Science Avenue, Science City, Luogang District, Guangzhou, Guangdong 510670

Patentee after: Guangzhou Jiaqi Intelligent Technology Co.,Ltd.

Address before: Room C2203, 182 Science Avenue, Science City, Guangzhou, Guangdong 510620

Patentee before: Xie Jialiang

Patentee before: Zhang Congzhe

ASS Succession or assignment of patent right

Owner name: XIE JIALIANG

Free format text: FORMER OWNER: GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO.,LTD.

Effective date: 20141023

Owner name: ZHANG CONGZHE

Effective date: 20141023

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20141023

Address after: Floor 18, Building A5, Headquarters Economic Zone, 243 Science Avenue, Science City, Luogang District, Guangzhou, Guangdong 510670

Patentee after: Xie Jialiang

Patentee after: Zhang Congzhe

Address before: Floor 18, Building A5, Headquarters Economic Zone, 243 Science Avenue, Science City, Luogang District, Guangzhou, Guangdong 510670

Patentee before: Guangzhou Jiaqi Intelligent Technology Co.,Ltd.

ASS Succession or assignment of patent right

Owner name: GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO.,LTD.

Free format text: FORMER OWNER: XIE JIALIANG

Effective date: 20150226

Free format text: FORMER OWNER: ZHANG CONGZHE

Effective date: 20150226

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20150226

Address after: Floor 18, Building A5, Headquarters Economic Zone, 243 Science Avenue, Science City, Luogang District, Guangzhou, Guangdong 510670

Patentee after: Guangzhou Jiaqi Intelligent Technology Co.,Ltd.

Address before: Floor 18, Building A5, Headquarters Economic Zone, 243 Science Avenue, Science City, Luogang District, Guangzhou, Guangdong 510670

Patentee before: Xie Jialiang

Patentee before: Zhang Congzhe

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121219

Termination date: 20210702

CF01 Termination of patent right due to non-payment of annual fee