CN103929685B - Video summary generation and indexing method - Google Patents

Video summary generation and indexing method

Info

Publication number
CN103929685B
CN103929685B CN201410151449.8A
Authority
CN
China
Prior art keywords
target
video
background
sequence
video summary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410151449.8A
Other languages
Chinese (zh)
Other versions
CN103929685A (en)
Inventor
沈志忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huarong Technology Co Ltd
Original Assignee
CHINA HUA RONG HOLDINGS Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHINA HUA RONG HOLDINGS Corp Ltd filed Critical CHINA HUA RONG HOLDINGS Corp Ltd
Priority to CN201410151449.8A priority Critical patent/CN103929685B/en
Publication of CN103929685A publication Critical patent/CN103929685A/en
Application granted granted Critical
Publication of CN103929685B publication Critical patent/CN103929685B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The present invention relates to a video summary generation and indexing method comprising the following steps: 1) background modeling is performed on the target frames of the original video to extract the background; 2) the current image is compared with the background model and segmented, and moving targets are determined from the comparison result; 3) the spatial distribution between frames and the features of the targets segmented in each frame are matched to track targets and record their motion trajectories; 4) the positions of the targets of a sequence in the image are corrected; 5) the moving targets are superimposed on the background to form a condensed video, and the video data is indexed to form an index data file. The beneficial effects of the invention are: after moving-target tracking, the tracking success rate is ensured and the quality of the generated video summary is greatly improved; the video index lets the user quickly review the video and conveniently jump to the original video to view the actual situation in full.

Description

Video summary generation and indexing method
Technical field
The present invention relates to the technical field of video surveillance, and more particularly to a video summary generation and indexing method.
Background technology
With the rapid development of multimedia technology, video capture, video surveillance, image compression coding and streaming media technology, video surveillance is applied ever more widely in daily life, so that it is no longer limited to security precautions but has become an effective means of supervision for all trades and professions; the flexibility of its applications has gone far beyond the scope defined by traditional security monitoring. However, surveillance recordings feature huge data volumes and long storage times. Finding clues and obtaining evidence from recordings in the traditional way consumes large amounts of manpower, material resources and time, and is so inefficient that the best opportunity may be missed. Therefore, in a video surveillance system, condensing the original video so that it can be browsed quickly and retrieval targets can be locked onto satisfies the various demands of public security, internet surveillance and criminal investigation. Video summarization plays a key role in video analysis and content-based video retrieval: it generates a brief video containing all the important activities of the original video. By playing several events simultaneously, even events that occurred at different times in the original video, the entire original video is compressed into a brief summary of events.
The purpose of a video summary is to let the user review a video quickly, and the quality of the generated summary directly affects the user experience. Current video summaries commonly suffer from incomplete targets and ghosting. Moreover, the summarized video disrupts the temporal order of the targets in the original video; if the user wants to understand the true situation of a target, the original video must still be consulted. How to jump directly from the summary video into the original video to view a target's original situation is therefore also a problem that needs to be solved.
Summary of the invention
It is an object of the present invention to provide a video summary generation and indexing method that overcomes the above deficiencies of the prior art.
The object of the present invention is achieved through the following technical solution:
A video summary generation and indexing method comprises the following steps:
1) Background modeling: background modeling is performed on the target frames in the original video to extract and separate the background from the original video;
2) Moving target extraction: the current image is compared with the background model and segmented, and moving targets are determined from the comparison result;
3) Moving target tracking: the spatial distribution between frames and the features of the targets segmented in each frame are matched to track targets and record their motion trajectories;
4) Moving target position correction: the set of tracked targets is corrected, mainly by correcting the position of each target of a sequence in the image; and
5) Summary synthesis and video index construction: the moving targets are superimposed on the background so that activities occurring at different times in the original video are played simultaneously in the summary, unoccluded or with little occlusion, producing a summary video that is relatively compact in time and space and contains the required activities of the original video; while synthesizing the video, the video data is indexed to form an index data file.
Further, in step 1) the background modeling uses a color background model.
Preferably, the color background model specifically uses a Gaussian mixture background algorithm.
Further, if the moving targets extracted in step 2) exhibit holes or noise interference, morphological opening and closing operators are applied to eliminate the holes and noise; if a target extracted in step 2) has been split into two or more targets, the pairwise spatial distances between all targets extracted in each frame are computed, and targets whose distance is less than a threshold Λ are identified as the same target.
Further, step 3) specifically includes the following steps:
a) The tracking module uses spatial distribution information and color features to match moving targets between adjacent frames; a successful match is treated as the same target and its trajectory is recorded, while an unsuccessful match is treated as a new moving target.
b) The tracking result is stored in a set Ω. An object ω in Ω is represented as ω = {β1, β2, …, βn}, where the sequence {βi} represents the occurrences of target ω in the video.
Further, the method for the matched jamming includes following two:
The first:Target in target and set omega that a new frame is split is matched, and is defined with minor function:
Time difference function:
Wherein,A target newly extracted is represented,Represent setIn a target.RepresentWhen Between stab,RepresentTimestamp.For the time difference threshold value of definition.
Distance difference function:
Wherein,A target newly extracted is represented,Represent setIn a target.Table ShowWithDistance spatially.For the range difference threshold value of definition.
Comparison function:
If comparison functionFor 1, then calculateWithColor histogram map distance, meet Histogram distance threshold value The match is successful, willIt is added toSequence in.If matching it is unsuccessful orFor 0, thenIt is a new target, willIt is added to setIn.
Second:By first method first by target beta andA newest frame for target sequenceIt is compared, if matching It is unsuccessful, then andFormer frame be compared, untilPreceding M frames.
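The three matching criteria described above (time difference, distance difference, and their product as a gating comparison) can be sketched as follows. The target representation (dicts with a frame-number timestamp `t` and a centroid `x`, `y`), and the use of centroid distance, are illustrative assumptions; the default thresholds are the values given later in the embodiment (15 frames, 20 pixels).

```python
import math

# Sketch of the time-difference, distance-difference and comparison
# functions. Field names and the centroid-distance simplification are
# assumptions; the embodiment measures the closest-point distance.

def time_diff_ok(beta, omega_last, t_thresh=15):
    """Time difference function T: 1 if the timestamps differ by less
    than the threshold, else 0."""
    return 1 if abs(beta["t"] - omega_last["t"]) < t_thresh else 0

def dist_diff_ok(beta, omega_last, d_thresh=20):
    """Distance difference function S: 1 if the spatial distance is
    below the threshold, else 0."""
    dx = beta["x"] - omega_last["x"]
    dy = beta["y"] - omega_last["y"]
    return 1 if math.hypot(dx, dy) < d_thresh else 0

def comparison(beta, omega_last):
    """Comparison function C = T * S; only when it is 1 is the more
    expensive color-histogram test performed."""
    return time_diff_ok(beta, omega_last) * dist_diff_ok(beta, omega_last)
```

For the second method, the same `comparison` would be evaluated against the last M frames of the target sequence in reverse order of appearance, stopping at the first histogram match.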
Further, the correction of moving target positions in step 4) specifically includes the following steps:
First step: after the whole video has been processed, for each target ω in Ω, collect the widths and heights of the targets in its sequence and sort them.
The sorted widths are W = {w1, w2, …, wn}.
The sorted heights are H = {h1, h2, …, hn}.
Second step: compute the averages of these sequences to obtain the target's width w̄ and height h̄, and correct every target position in the sequence according to w̄ and h̄.
Further, during the summary synthesis of step 5), the code, position and first-appearance timestamp of each moving target participating in the merge are recorded for every frame of the video summary, and these values are kept in the index file.
The beneficial effects of the present invention are: after moving-target tracking, the tracking success rate is ensured and the quality of the generated video summary is greatly improved; the video index lets the user quickly review the video and conveniently jump to the original video to view the actual situation in full.
Brief description of the drawings
The present invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 is a flow diagram of the video summary generation and indexing method described in the embodiment of the present invention;
Fig. 2 shows the positions in the image of one moving target sequence before the correction described in the embodiment of the present invention;
Fig. 3 shows the positions in the image of the same moving target sequence after the correction described in the embodiment of the present invention.
Detailed description of the embodiment
As shown in Fig. 1, the embodiment of the present invention consists of the steps of background modeling, moving target extraction, moving target tracking, moving target correction, summary synthesis and video indexing. The specific steps are as follows:
1. Background modeling
The background modeling module can use various image background modeling algorithms, which fall into two classes: color background models and texture background models. The idea of a color background model is to model the color value (gray or color) of each pixel in the image. If the color value of the pixel at coordinate (x, y) in the current image differs substantially from the value at (x, y) in the background model, the current pixel is considered foreground; otherwise it is background.
The background modeling module of this embodiment uses the Gaussian mixture background algorithm among the color background models. The Gaussian mixture model was developed on the basis of the single Gaussian model: it smoothly approximates a probability density function of arbitrary shape by a weighted average of several Gaussian probability density functions. The mixture model assumes that K Gaussian distributions describe the color of each pixel, with K typically 3 to 5; this embodiment uses K = 3.
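A minimal single-pixel sketch of the mixture-of-Gaussians background test: a pixel value is treated as background if it falls within a few standard deviations of one of the K weighted Gaussians modelling that pixel. The matching margin (2.5 σ), the weight threshold, and the parameter values are illustrative assumptions; the on-line update equations of the full algorithm are omitted.

```python
# Per-pixel background test for a mixture of K Gaussians. The mixture
# parameters and thresholds below are illustrative, not from the patent.

K = 3  # number of Gaussians per pixel, as in this embodiment

def is_background(value, mixture, match_sigmas=2.5, weight_thresh=0.2):
    """mixture: list of (weight, mean, stddev) tuples, one per Gaussian.
    A value matching a sufficiently weighted Gaussian is background."""
    for weight, mean, std in mixture:
        if weight >= weight_thresh and abs(value - mean) <= match_sigmas * std:
            return True
    return False

# Example mixture for one gray-scale pixel (weights sum to 1).
mixture = [(0.6, 120.0, 5.0), (0.3, 60.0, 8.0), (0.1, 200.0, 10.0)]
```

In the full algorithm the mixture parameters are updated frame by frame, and low-weight Gaussians (such as the third one above) represent transient foreground rather than background.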
2. Moving target extraction
After background modeling is completed, the current image is compared with the background model, and the moving targets to be detected are determined from the comparison result. The foreground obtained in this way usually contains a lot of noise; to eliminate it, this embodiment applies an opening operation and a closing operation to the extracted moving-target image and then discards the smaller contours.
After the targets are extracted, this embodiment counts the number of pixels contained in each target; if a target contains fewer than 400 pixels, it is treated as interference, eliminated, and not processed further.
To solve the problem of one target being segmented into two or more targets, the pairwise spatial distances between all targets in the current frame are computed, in units of pixels, and targets whose distance is less than Λ are identified as the same target. In this embodiment Λ is 15 pixels.
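The merging of over-segmented targets can be sketched as follows: any two targets closer than the threshold Λ (15 pixels in this embodiment) are identified as the same target, and the grouping is transitive, which a small union-find handles cleanly. Centroid distance is used here for simplicity; this is an assumption, since the patent only states that the distance is measured in pixels.

```python
import math

# Group over-segmented targets: targets closer than lam belong together,
# transitively. Centroid distance is an illustrative simplification.

def merge_targets(centroids, lam=15):
    """Return groups (lists of indices) of targets that belong together."""
    parent = list(range(len(centroids)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            if math.dist(centroids[i], centroids[j]) < lam:
                parent[find(i)] = find(j)  # union the two groups

    groups = {}
    for i in range(len(centroids)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```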
3. Moving target tracking
For a moving target in the current frame, because the time interval between frames is very short, the space it occupies and its spatial position change little. This embodiment therefore uses spatial distribution information and color features to match moving targets between adjacent frames: a successful match is treated as the same target and its trajectory is recorded, while an unsuccessful match is treated as a new moving target.
The tracking result is stored in a set Ω. An object ω in Ω is represented as ω = {β1, β2, …, βn}, where the sequence {βi} represents the occurrences of target ω in the video.
If the extraction module extracts the targets of some frame badly, tracking can fail. To improve the tracking success rate, the following two methods are used:
1) A target newly segmented in a frame is matched not only against the targets of the previous frame but against the targets in the set Ω, using the following functions:
Time difference function: T(β, ω) = 1 if |t(β) − t(ω)| < T_thr, and 0 otherwise, where β denotes a newly extracted target, ω denotes a target in the set Ω, t(β) and t(ω) denote their timestamps, and T_thr is a defined time difference threshold.
Distance difference function: S(β, ω) = 1 if d(β, ω) < D_thr, and 0 otherwise, where d(β, ω) denotes the spatial distance between β and ω, and D_thr is a defined distance difference threshold.
Comparison function: C(β, ω) = T(β, ω) · S(β, ω).
If the comparison function C(β, ω) is 1, the color histogram distance between β and ω is computed; if it satisfies the histogram distance threshold, the match succeeds and β is added to the sequence of ω. If the match fails, or C(β, ω) is 0, β is a new target and is added to the set Ω.
2) In the method above, the target is compared only with the newest frame of the target sequence of ω; if the last frame of ω was extracted badly, tracking may fail. Therefore the target β is first compared with the newest frame βn of ω's sequence; if the match fails, β is compared with the previous frame, and so on, up to the M-th previous frame of βn.
In this embodiment, the sequence number of the frame in which a target appears in the video stream is used as its timestamp; the first frame is numbered 0 and the numbers increase from there. The time difference threshold T_thr is 15, meaning the timestamps of the target β to be matched and of a target ω in Ω must differ by no more than 15 frames.
When computing the distance between the target β to be matched and a target ω in Ω, the pixel distance between the closest points of the two targets is used. The distance difference threshold D_thr is 20, meaning the distance between β and ω must be within 20 pixels.
In the tracking module of this embodiment M is 10, meaning the target β to be matched may be compared with the last 10 targets of ω's sequence in Ω, in reverse order of appearance time.
This embodiment computes the color histograms of the target β to be matched and of ω in Ω, and calculates the Bhattacharyya distance between the two histograms to describe their similarity. If the Bhattacharyya distance is less than 0.6, β and ω match and β is added to ω's sequence. If β matches none of the targets in Ω, β is given a target code and added to the set Ω.
4. Moving target position correction
After the whole video has been processed, the position correction module collects, for each target ω in Ω, the widths and heights of the targets in its sequence in order to correct the target positions of ω's sequence. The widths and heights are sorted in descending order, giving the sorted widths W = {w1, w2, …, wn} and the sorted heights H = {h1, h2, …, hn}.
The averages of the top N sorted widths and heights are computed to obtain the average width and average height. Here N is 20% of the sequence length. During correction the boxes are aligned on the target center: the width is modified symmetrically left and right, and the height symmetrically up and down.
Fig. 2 shows the positions in the image of a target sequence before correction, and Fig. 3 shows the positions of the corrected target sequence. Correcting the moving target positions remedies incomplete target extraction that occurred during the extraction process and improves the quality of the generated video summary.
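The size correction described above can be sketched as follows: the widths and heights of one target's sequence are sorted in descending order, the top N (20% of the sequence, at least 1) are averaged, and every box is resized to that average about its center. The box format (cx, cy, w, h), i.e. center plus size, is an illustrative assumption.

```python
# Correct one target sequence to a common width/height taken from the
# average of its top-20% largest boxes, keeping each box's center fixed.

def correct_sequence(boxes, top_frac=0.2):
    """boxes: list of (cx, cy, w, h) tuples for one target's sequence."""
    n = max(1, int(len(boxes) * top_frac))
    widths = sorted((w for _, _, w, _ in boxes), reverse=True)
    heights = sorted((h for _, _, _, h in boxes), reverse=True)
    avg_w = sum(widths[:n]) / n
    avg_h = sum(heights[:n]) / n
    # Center-aligned: width changes symmetrically left/right,
    # height symmetrically up/down, so (cx, cy) is unchanged.
    return [(cx, cy, avg_w, avg_h) for cx, cy, _, _ in boxes]
```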
5. Summary synthesis and video index
This module mainly synthesizes the tracked moving targets with the video background so that activities occurring at different times in the original video are played simultaneously in the video summary, unoccluded (or with little occlusion), producing a summary video that is relatively compact in time and space and contains the required activities of the original video.
For each frame of the video summary, selecting which moving targets appear together is the key to the synthesis. This embodiment decides by computing an energy loss function for each moving target, composed of a moving-target time-difference loss term and a moving-target collision loss term; moving targets whose energy loss value qualifies are merged.
Before each summary frame is produced, the moving targets are divided into three sets: merge completed (S1), being merged (S2), and to be merged (S3). In order of appearance time, the energy loss between each target in S3 and the set S2 is computed in turn, and targets that satisfy the loss threshold appear in the same summary frame and are merged.
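A heavily simplified sketch of this scheduling step: candidates from the "to be merged" set S3 are considered in order of first appearance and admitted to the frame being composed only if they do not collide with targets already being merged (set S2). The patent does not give the exact form of the energy loss function, so the time-difference term is omitted and only the collision term is modelled; box format (x, y, w, h) and all names are illustrative assumptions.

```python
# Greedy, collision-only stand-in for the energy-loss-based selection.

def overlaps(a, b):
    """Axis-aligned overlap test for boxes (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def admit_candidates(s2_boxes, s3_boxes):
    """s3_boxes assumed ordered by time of first appearance."""
    admitted = []
    for box in s3_boxes:
        if all(not overlaps(box, other) for other in s2_boxes + admitted):
            admitted.append(box)
    return admitted
```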
A background image is needed during merging; the background at the earliest appearance time of the moving targets in that frame is chosen as the background image.
When moving targets are merged, the code, position and first-appearance timestamp of each moving target participating in the merge are recorded for every frame and saved in the index file.
When the user clicks on the video, it is determined whether the mouse position falls within the envelope of a moving target; if the mouse position lies within some target's range, the index file is searched to obtain the time at which that target appears in the original video.
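The click-to-original lookup can be sketched as follows. The index is modelled as a list of records (target code, summary frame number, bounding box, first-appearance timestamp in the original video); this record layout is an illustrative assumption, since the patent only states that the code, position and first timestamp of each merged target are stored.

```python
# Find the target whose envelope contains the clicked point in the given
# summary frame and return its first-appearance time in the original video.

def lookup(index, frame_no, x, y):
    """index: iterable of (target_id, frame, (bx, by, bw, bh), first_ts).
    Returns the original-video timestamp of the clicked target, or None."""
    for target_id, frame, (bx, by, bw, bh), first_ts in index:
        if frame == frame_no and bx <= x < bx + bw and by <= y < by + bh:
            return first_ts
    return None
```

With the timestamp in hand, the player can seek the original video to that position, which is the "jump from summary to original" capability the invention describes.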
The present invention is not limited to the preferred embodiment above; under the enlightenment of the present invention anyone may derive products of various other forms, but any change in shape or structure that yields a technical solution identical or similar to that of the present application falls within the protection scope of the present invention.

Claims (6)

1. A video summary generation and indexing method, characterized by comprising the following steps:
1) Background modeling: background modeling is performed on the target frames in the original video to extract and separate the background from the original video;
2) Moving target extraction: the current image is compared with the background model and segmented, and moving targets are determined from the comparison result;
3) Moving target tracking: the spatial distribution between frames and the features of the targets segmented in each frame are matched to track targets and record their motion trajectories;
4) Moving target position correction: the set of tracked targets is corrected, mainly by correcting the position of each target of a sequence in the image; and
5) Summary synthesis and video index construction: the moving targets are superimposed on the background so that activities occurring at different times in the original video are played simultaneously in the summary, unoccluded or with little occlusion, producing a summary video that is relatively compact in time and space and contains the required activities of the original video; while synthesizing the video, the video data is indexed to form an index data file;
Step 3) specifically includes the following steps:
a) The tracking module uses spatial distribution information and color features to match moving targets between adjacent frames; a successful match is treated as the same target and its trajectory is recorded; an unsuccessful match is treated as a new moving target; and
b) The tracking result is stored in a set Ω; an object ω in Ω is represented as ω = {β1, β2, …, βn}, where the sequence {βi} represents the occurrences of target ω in the video;
The matched tracking includes the following two methods:
First method: a target newly segmented in a frame is matched against the targets in the set Ω, using the following functions:
Time difference function: T(β, ω) = 1 if |t(β) − t(ω)| < T_thr, and 0 otherwise, where β denotes a newly extracted target, ω denotes a target in the set Ω, t(β) denotes the timestamp of β, t(ω) denotes the timestamp of ω, and T_thr is a defined time difference threshold;
Distance difference function: S(β, ω) = 1 if d(β, ω) < D_thr, and 0 otherwise, where d(β, ω) denotes the spatial distance between β and ω, and D_thr is a defined distance difference threshold;
Comparison function: C(β, ω) = T(β, ω) · S(β, ω);
If the comparison function C(β, ω) is 1, the color histogram distance between β and ω is computed; if it satisfies the histogram distance threshold, the match succeeds and β is added to the sequence of ω; if the match fails, or C(β, ω) is 0, β is a new target and is added to the set Ω;
Second method: as in the first method, the target β is first compared with the newest frame βn of the target sequence of ω; if the match fails, β is compared with the previous frame, and so on, up to the M-th previous frame of βn.
2. The video summary generation and indexing method according to claim 1, characterized in that in step 1) the background modeling uses a color background model.
3. The video summary generation and indexing method according to claim 2, characterized in that the color background model specifically uses a Gaussian mixture background algorithm.
4. The video summary generation and indexing method according to claim 1, characterized in that if the moving targets extracted in step 2) exhibit holes or noise interference, morphological opening and closing operators are applied to eliminate the holes and noise; and if a target extracted in step 2) has been split into two or more targets, the pairwise spatial distances between all targets extracted in each frame are computed, and targets whose distance is less than a threshold Λ are identified as the same target.
5. The video summary generation and indexing method according to claim 1, characterized in that the correction of moving target positions in step 4) specifically includes the following steps:
First step: after the whole video has been processed, for each target ω in Ω, collect the widths and heights of the targets in its sequence and sort them;
The sorted widths are W = {w1, w2, …, wn};
The sorted heights are H = {h1, h2, …, hn};
Second step: compute the averages of these sequences to obtain the target's width w̄ and height h̄, and correct every target position in the sequence according to w̄ and h̄.
6. The video summary generation and indexing method according to claim 1, characterized in that during the summary synthesis of step 5), the code, position and first-appearance timestamp of each moving target participating in the merge are recorded for every frame of the video summary, and these values are kept in the index file.
CN201410151449.8A 2014-04-15 2014-04-15 Video summary generation and indexing method Active CN103929685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410151449.8A CN103929685B (en) 2014-04-15 2014-04-15 Video summary generation and indexing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410151449.8A CN103929685B (en) 2014-04-15 2014-04-15 Video summary generation and indexing method

Publications (2)

Publication Number Publication Date
CN103929685A CN103929685A (en) 2014-07-16
CN103929685B true CN103929685B (en) 2017-11-07

Family

ID=51147740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410151449.8A Active CN103929685B (en) 2014-04-15 2014-04-15 Video summary generation and indexing method

Country Status (1)

Country Link
CN (1) CN103929685B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9639762B2 (en) * 2014-09-04 2017-05-02 Intel Corporation Real time video summarization
CN104268563B (en) * 2014-09-15 2017-05-17 合肥工业大学 Video abstraction method based on abnormal behavior detection
CN104717573B (en) * 2015-03-05 2018-04-13 广州市维安电子技术有限公司 A kind of generation method of video frequency abstract
CN104935888B (en) * 2015-06-11 2019-01-04 惠州Tcl移动通信有限公司 It is a kind of can tagged object video monitoring method and its video monitoring system
CN105007464A (en) * 2015-07-20 2015-10-28 江西洪都航空工业集团有限责任公司 Method for concentrating video
CN105469425A (en) * 2015-11-24 2016-04-06 上海君是信息科技有限公司 Video condensation method
CN107493520A (en) * 2016-06-13 2017-12-19 合肥君正科技有限公司 A kind of video abstraction generating method and device
CN106714007A (en) * 2016-12-15 2017-05-24 重庆凯泽科技股份有限公司 Video abstract method and apparatus
CN109511019A (en) * 2017-09-14 2019-03-22 中兴通讯股份有限公司 A kind of video summarization method, terminal and computer readable storage medium
CN107920213A (en) * 2017-11-20 2018-04-17 深圳市堇茹互动娱乐有限公司 Image synthesizing method, terminal and computer-readable recording medium
CN109102530B (en) 2018-08-21 2020-09-04 北京字节跳动网络技术有限公司 Motion trail drawing method, device, equipment and storage medium
CN110532916B (en) * 2019-08-20 2022-11-04 北京地平线机器人技术研发有限公司 Motion trail determination method and device
CN111539974B (en) * 2020-04-07 2022-11-11 北京明略软件系统有限公司 Method and device for determining track, computer storage medium and terminal
CN111918146B (en) * 2020-07-28 2021-06-01 广州筷子信息科技有限公司 Video synthesis method and system
CN111739128B (en) * 2020-07-29 2021-08-31 广州筷子信息科技有限公司 Target video generation method and system
CN111563489A (en) * 2020-07-14 2020-08-21 浙江大华技术股份有限公司 Target tracking method and device and computer storage medium
CN113873200B (en) * 2021-09-26 2024-02-02 珠海研果科技有限公司 Image identification method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004047109A1 (en) * 2002-11-19 2004-06-03 Koninklijke Philips Electronics N.V. Video abstracting
CN103473333A (en) * 2013-09-18 2013-12-25 北京声迅电子股份有限公司 Method and device for extracting video abstract from ATM (Automatic Teller Machine) scene

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202003350U (en) * 2010-12-14 2011-10-05 广东鑫程电子科技有限公司 Video summary system
JP2013192062A (en) * 2012-03-14 2013-09-26 Toshiba Corp Video distribution system, video distribution apparatus, video distribution method and program
CN102930061B (en) * 2012-11-28 2016-01-06 安徽水天信息科技有限公司 A kind of video summarization method based on moving object detection
CN103092929B (en) * 2012-12-30 2016-12-28 信帧电子技术(北京)有限公司 A kind of generation method and device of video frequency abstract
CN103092963A (en) * 2013-01-21 2013-05-08 信帧电子技术(北京)有限公司 Video abstract generating method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004047109A1 (en) * 2002-11-19 2004-06-03 Koninklijke Philips Electronics N.V. Video abstracting
CN103473333A (en) * 2013-09-18 2013-12-25 北京声迅电子股份有限公司 Method and device for extracting video abstract from ATM (Automatic Teller Machine) scene

Also Published As

Publication number Publication date
CN103929685A (en) 2014-07-16

Similar Documents

Publication Publication Date Title
CN103929685B (en) Video summary generation and indexing method
US20220114735A1 (en) Trajectory cluster model for learning trajectory patterns in video data
US11233976B2 (en) Anomalous stationary object detection and reporting
CN104063883B (en) A kind of monitor video abstraction generating method being combined based on object and key frame
Kwon et al. Tracking by sampling trackers
EP2780871B1 (en) Tracklet-based multi-commodity network flow for tracking multiple people
Liu et al. Pose-guided R-CNN for jersey number recognition in sports
US8416296B2 (en) Mapper component for multiple art networks in a video analysis system
CN107220585A (en) A kind of video key frame extracting method based on multiple features fusion clustering shots
CN101470809B (en) Moving object detection method based on expansion mixed gauss model
CN107273835A (en) Act of violence intelligent detecting method based on video analysis
CN202003350U (en) Video summary system
US20160029031A1 (en) Method for compressing a video and a system thereof
CN107659754B (en) Effective concentration method for monitoring video under condition of tree leaf disturbance
CN110688940A (en) Rapid face tracking method based on face detection
Concha et al. Multi-stream convolutional neural networks for action recognition in video sequences based on adaptive visual rhythms
Komorowski et al. Deepball: Deep neural-network ball detector
CN108629301B (en) Human body action recognition method
Hari et al. Event detection in cricket videos using intensity projection profile of Umpire gestures
Hung et al. Event detection of broadcast baseball videos
CN110414430B (en) Pedestrian re-identification method and device based on multi-proportion fusion
CN108830882A (en) Video abnormal behaviour real-time detection method
Casagrande et al. Abnormal motion analysis for tracking-based approaches using region-based method with mobile grid
CN112446417A (en) Spindle-shaped fruit image segmentation method and system based on multilayer superpixel segmentation
CN104618745B (en) A kind of device of product placement dynamic in video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: 100088 floor 15, block A, Haidian District urban construction, Beijing.

Patentee after: China Huarong Technology Group Limited

Address before: 100088 Haidian District, Beijing, North Taiping Road 18, city building A block 15.

Patentee before: CHINA HUA RONG HOLDINGS CORPORATION LTD.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210310

Address after: Floor 15, block a, 18 Beitaipingzhuang Road, Haidian District, Beijing

Patentee after: HUARONG TECHNOLOGY Co.,Ltd.

Address before: 100088 floor 15, block A, Haidian District urban construction, Beijing.

Patentee before: CHINA HUARONG TECHNOLOGY GROUP Co.,Ltd.