CN107657625A - Unsupervised video segmentation method fusing spatio-temporal multi-feature representation - Google Patents

Unsupervised video segmentation method fusing spatio-temporal multi-feature representation

Info

Publication number
CN107657625A
CN107657625A (application CN201710810535.9A)
Authority
CN
China
Prior art keywords
pixel
super
segmentation
segmentation result
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710810535.9A
Other languages
Chinese (zh)
Inventor
Zhang Kaihua (张开华)
Li Xuejun (李雪君)
Song Huihui (宋慧慧)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN201710810535.9A
Publication of CN107657625A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/215 Motion-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Abstract

The invention discloses an unsupervised video segmentation method that fuses spatio-temporal multi-feature representations. It exploits the distinctions among a target's motion information, saliency features, and color features to extract discriminative target features, and combines a Gaussian mixture model to achieve stable and accurate target segmentation. The method comprises: superpixel segmentation; optical-flow matching; optimization of the matching result; building a graph model and solving for a superpixel-level segmentation result; training the Gaussian-mixture-model parameters with that segmentation result; solving for a pixel-level segmentation result; and combining the existing superpixel-level and pixel-level results to obtain the final segmentation. Performing superpixel segmentation on every frame markedly reduces computational complexity, and optimizing the matching information obtained from optical flow with non-local spatio-temporal information improves the robustness of the segmentation. The introduction of the Gaussian mixture model compensates for the large edge-matching error of superpixel segmentation, while the saliency features further improve the accuracy and confidence of the segmentation result.

Description

Unsupervised video segmentation method fusing spatio-temporal multi-feature representation
Technical field
The present invention relates to an unsupervised video segmentation method fusing spatio-temporal multi-feature representations. It belongs to the field of computer vision, and more particularly to the field of video segmentation in image processing.
Background art
A video is an image sequence composed of a series of consecutive single images, and generally also contains information such as text and speech. For ease of transmission and use, a video usually needs to be segmented: regions of no interest to the user are discarded, and the data features of the content of interest are obtained for subsequent feature extraction and analysis.
Video segmentation, also referred to as motion segmentation, divides an image sequence into multiple regions according to some criterion; its purpose is to separate meaningful entities from the video sequence. In image processing, the segmentation of images and video is a very important low-level technique: it is the basis of almost all artificial-intelligence technologies based on image analysis, and it provides important data for numerous high-level applications, such as vehicle identification, license-plate recognition, image/video retrieval, medical image analysis, object-based video coding, face recognition, and target detection, tracking and recognition. In all of these applications, segmentation is typically performed so that the image/video can be further analyzed and recognized; the accuracy of the segmentation directly affects the validity of the subsequent work and is therefore of great importance.
Video segmentation has long been one of the hardest problems in computer vision and machine learning. Broadly, the difficulties lie in the random motion and deformation of the target to be segmented, rapidly changing complex backgrounds, inaccurate motion information, and target blur; yet obtaining accurate information in turn requires an accurate segmentation result, which creates a circular dependency. To date, no general, reliable segmentation algorithm exists that applies to all complex, changing scenes; most video segmentation methods proposed by scholars at home and abroad target a particular application scenario or a particular class of images/videos. Video segmentation will therefore remain a research hotspot in urgent need of solutions for years to come.
The dominant video segmentation approaches today are essentially all built on research into still-image segmentation. Image segmentation refers to dividing an image into multiple regions, each of which is a set of pixels grouped by some rule. Graph cut is the principal and most basic method of current image segmentation. Based on graph theory, it constructs an energy function and segments the image according to the foreground and background marked by the user; the constructed energy function can be minimized globally using a max-flow/min-cut algorithm.
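For reference, the graph-cut energy described above is conventionally written as follows; this is the standard textbook formulation, not a formula quoted from the present application:

```latex
E(L) = \sum_{p \in \mathcal{P}} D_p(L_p) + \lambda \sum_{(p,q) \in \mathcal{N}} V_{p,q}(L_p, L_q)
```

Here L assigns every pixel p a binary label (foreground or background), the data term D_p(L_p) measures how poorly the label fits the appearance models seeded by the user's foreground/background marks, the smoothness term V_{p,q} penalizes label disagreement between neighboring pixels (p, q), and \lambda balances the two terms. For binary energies of this form, the max-flow/min-cut algorithm finds the exact global minimum.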
Video segmentation differs from still-image segmentation mainly in the introduction of motion information. According to whether human guidance is required, video segmentation can be divided into unsupervised and semi-supervised video segmentation; according to the information used, it can be divided into video segmentation based on temporal information, based on spatial information, and based on joint spatio-temporal information.
Summary of the invention
To address the deficiencies of current video segmentation methods, the purpose of the present invention is to propose, on the basis of conventional video segmentation algorithms and superpixel algorithms, a new unsupervised video segmentation method combining multiple features: temporal features, spatial features, a Gaussian mixture model, and saliency features. To improve efficiency and segmentation accuracy, the method introduces, on top of conventional video segmentation, information such as superpixel color features and the motion association of objects. In its use of temporal information it is no longer confined to information transfer between adjacent frames, but exploits the non-local information of the video sequence to improve the algorithm's robustness. The selection of color features representing each superpixel is also optimized: new color features are introduced on top of the traditional RGB features, raising the feature dimensionality of each superpixel and thereby improving segmentation precision. Finally, because segmentation with superpixels alone causes large edge errors, a Gaussian mixture model is introduced to perform pixel-level optimization, forming a multi-level scheme in which superpixel-level and pixel-level segmentation complement each other and effectively improve segmentation accuracy.
To achieve the above objects, the present invention adopts the following technical solutions:
An unsupervised video segmentation method fusing spatio-temporal multi-feature representations comprises: obtaining the video sequence to be segmented; processing the video sequence with superpixel segmentation; matching information between preceding and following frames with optical flow; deriving the range of the moving target from the information of adjacent frames as the initialization input of a graph model; optimizing the matching result with global information; building the graph model and solving a preliminary superpixel-level segmentation result with a graph-cut algorithm; training the Gaussian-mixture-model parameters with the preliminary result and solving a pixel-level segmentation result; obtaining a saliency-based segmentation result from saliency features; and combining the existing superpixel-level and pixel-level results by voting to obtain the final segmentation result and output the segmentation of the moving target.
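To illustrate the optical-flow matching just described, the following is a minimal sketch, assuming Farneback dense flow (cv2.calcOpticalFlowFarneback) as a stand-in for whichever flow estimator the application intends, and a majority vote of warped pixels to pick the corresponding superpixel; the function and variable names are illustrative, not the patented implementation.

```python
import cv2
import numpy as np

def match_superpixels(frame_n, frame_n1, labels_n, labels_n1):
    """For each superpixel of frame n, find the superpixel of frame n+1 that
    most of its pixels land on after warping by dense optical flow."""
    gray_n = cv2.cvtColor(frame_n, cv2.COLOR_BGR2GRAY)
    gray_n1 = cv2.cvtColor(frame_n1, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(gray_n, gray_n1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_n.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Destination coordinates of every pixel, clipped to the image grid.
    xd = np.clip((xs + flow[..., 0]).round().astype(int), 0, w - 1)
    yd = np.clip((ys + flow[..., 1]).round().astype(int), 0, h - 1)
    matches = {}
    for sp in np.unique(labels_n):
        mask = labels_n == sp
        # Superpixel labels of frame n+1 that the warped pixels fall into;
        # the most frequent one is taken as the corresponding superpixel.
        hits = labels_n1[yd[mask], xd[mask]]
        matches[sp] = np.bincount(hits).argmax()
    return matches
```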
The concrete steps are as follows:
1) Perform superpixel segmentation on all frames of the video sequence, reducing computational complexity and speeding up processing;
2) Compute the feature mean and center position of each superpixel; the feature item of each superpixel is represented by an eight-dimensional vector (R, G, B, H, S, V, x, y) (a feature-extraction sketch follows this list);
3) Because optical-flow computation is inaccurate, the target location cannot be judged accurately from optical flow alone; therefore, combining optical flow with voting, compute the approximate location range of the moving target and at the same time judge the region, foreground or background, to which each superpixel belongs; the result serves as the initialization input of the graph model;
4) Use the information provided by optical flow to compute the associations between superpixels of adjacent frames, finding the mutually corresponding superpixel pairs between frame n and frame n+1;
5) After matching is completed for all superpixels of the video sequence, compute a new non-local superpixel feature value for each superpixel of each frame to optimize the original superpixel features; when n <= 5, each superpixel of the frame is optimized from the preceding n-1 frames, and when n > 5, from the preceding five frames;
6) Build a graph model composed of unary potential functions and pairwise potential functions; the unary potential comprises a color-feature term and a position-feature term, and the pairwise potential comprises a temporal smoothing term and a spatial smoothing term;
7) Compute the cost function of the graph model from the optimized superpixel information of frame n together with the superpixel information of frame n+1, and iterate the graph-cut and max-flow/min-cut computation until convergence to obtain the optimal superpixel-level target segmentation result, re-judging whether each superpixel belongs to foreground or background;
8) Use the superpixel-level segmentation result obtained in step 7) as the prior condition to train each parameter of the Gaussian mixture model, then segment the input image again with the trained model to obtain the pixel-level segmentation result;
9) Perform saliency analysis on the input image sequence and extract the parts whose saliency probability exceeds a threshold T, output as the saliency segmentation result;
10) Combine the obtained superpixel-level, pixel-level, and saliency segmentation results by voting, and obtain and output the final video object segmentation result.
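As a companion to steps 1) and 2), here is a minimal sketch using SLIC from scikit-image as a stand-in superpixel algorithm (the application does not name a specific one) and averaging the eight per-pixel channels (R, G, B, H, S, V, x, y) over each superpixel; parameter values such as n_segments=500 are illustrative assumptions.

```python
import cv2
import numpy as np
from skimage.segmentation import slic

def superpixel_features(rgb):
    """rgb: uint8 image of shape (h, w, 3) in RGB order."""
    labels = slic(rgb, n_segments=500, compactness=10)  # step 1): superpixels
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)          # H, S, V channels
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Per-pixel 8-D samples [R, G, B, H, S, V, x, y] that step 2) averages;
    # the mean of x and y doubles as the superpixel's center position.
    chans = np.dstack([rgb.astype(float), hsv.astype(float),
                       xs[..., None].astype(float),
                       ys[..., None].astype(float)]).reshape(-1, 8)
    flat = labels.ravel()
    feats = {sp: chans[flat == sp].mean(axis=0) for sp in np.unique(labels)}
    return labels, feats
```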
The beneficial effects of the invention are: (1) The information transfer used in video segmentation is generalized to the whole sequence; multi-frame information is used for optimization, which markedly improves the algorithm's robustness and achieves a good denoising effect. (2) The feature dimensionality representing each superpixel is expanded to eight, which markedly improves segmentation accuracy with essentially no effect on computational complexity. (3) Superpixel-level and pixel-level segmentation are combined, compensating for the low edge accuracy of fast superpixel segmentation. (4) Saliency features are introduced, and voting further improves the robustness of the segmentation scheme.
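The voting rule used for the final fusion is not spelled out in the application; a natural reading is a per-pixel majority vote over the three binary masks, sketched below together with the threshold-T saliency mask of step 9). The 2-of-3 rule and the default threshold value are assumptions.

```python
import numpy as np

def saliency_mask(saliency_prob, T=0.5):
    """Step 9): keep pixels whose saliency probability exceeds T.
    T=0.5 is a placeholder; the application leaves the threshold unspecified."""
    return saliency_prob > T

def fuse_by_voting(mask_superpixel, mask_pixel, mask_saliency):
    """Step 10): a pixel is foreground if at least two of the three
    segmentation results agree (assumed 2-of-3 majority rule)."""
    votes = (mask_superpixel.astype(int) + mask_pixel.astype(int)
             + mask_saliency.astype(int))
    return votes >= 2
```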
Brief description of the drawings
Fig. 1 is the overall structural schematic of the method.
Fig. 2 is the flow chart of optimizing superpixel feature values by nearest-neighbor search in the method.
Embodiment
The present invention is further described below in conjunction with the accompanying drawings, so that those skilled in the art can implement it with reference to the description.
As shown in Fig. 1, the present invention provides an unsupervised video segmentation method based on non-local spatio-temporal feature learning. It comprises: obtaining the video sequence to be segmented; processing it with superpixel segmentation; matching preceding and following frames with optical flow; deriving the approximate range of the moving target from the optical-flow information of adjacent frames; optimizing the matching result with non-local spatio-temporal information; building a graph model, solving it, and outputting the superpixel-level segmentation result; training a Gaussian mixture model with the superpixel-level result as the prior condition; performing pixel-level segmentation of the input image with the trained Gaussian mixture model; and voting over the saliency segmentation result, the pixel-level segmentation result, and the superpixel-level segmentation result to obtain the final video object segmentation result. The video input processing feeds the video to be segmented into the system and stores it as a single-frame picture sequence available for processing. The superpixel segmentation module applies superpixel segmentation to the pending picture sequence, which facilitates subsequent algorithms and reduces computational complexity. The optical-flow matching module matches corresponding superpixel blocks between adjacent frames and derives the approximate range of the moving target. The graph model comprises unary and pairwise potential functions and mathematically models the pending image, converting it into a model whose minimum can be solved by a graph-cut algorithm. The Gaussian-mixture-model training module comprises two components for each pixel, a position feature and a color feature, and its training prior is the superpixel-level object segmentation result. The voting scheme comprehensively combines the superpixel segmentation result of the graph-cut algorithm, the pixel segmentation result of the Gaussian mixture model, and the saliency segmentation result of each frame, and obtains and outputs the final moving-object segmentation result.
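A minimal sketch of the pixel-level refinement described above: one Gaussian mixture model per class (foreground, background) is fitted to position-plus-color samples drawn from the superpixel-level result, and every pixel is then relabeled by comparing log-likelihoods. The component count and the exact 5-D feature layout are assumptions consistent with, but not dictated by, the text.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def pixel_level_segmentation(rgb, coarse_mask, n_components=5):
    """rgb: (h, w, 3) image; coarse_mask: boolean (h, w) superpixel-level
    foreground mask used as the training prior."""
    h, w, _ = rgb.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # 5-D per-pixel samples: position (x, y) plus color (R, G, B).
    feats = np.column_stack([xs.ravel(), ys.ravel(),
                             rgb.reshape(-1, 3)]).astype(float)
    fg = GaussianMixture(n_components=n_components).fit(feats[coarse_mask.ravel()])
    bg = GaussianMixture(n_components=n_components).fit(feats[~coarse_mask.ravel()])
    # Relabel each pixel by comparing log-likelihoods under the two models.
    return (fg.score_samples(feats) > bg.score_samples(feats)).reshape(h, w)
```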
As shown in Fig. 2, nearest-neighbor search optimizes superpixel feature values using the five frames preceding the target frame. For a given target superpixel in the target frame, a KD-tree search over the set of all superpixels of the preceding five frames finds its five nearest neighbors; each nearest neighbor is assigned a weight according to its Euclidean distance from the target superpixel, and the target superpixel is optimized by this weighting, yielding a new superpixel optimized with non-local features. The updated target superpixel keeps the position information of the original superpixel.
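A minimal sketch of the Fig. 2 procedure, assuming scipy's cKDTree for the KD-tree search. The exponentially decaying weights and the blending factor alpha are assumptions: the text states only that the weights depend on Euclidean distance and that the updated superpixel keeps its original position entries.

```python
import numpy as np
from scipy.spatial import cKDTree

def nonlocal_optimize(target_feats, prev_feats, k=5, alpha=0.5):
    """target_feats: (m, 8) features of the target frame's superpixels;
    prev_feats: (n, 8) pooled features of the previous five frames, n >= k."""
    tree = cKDTree(prev_feats)
    dists, idx = tree.query(target_feats, k=k)     # five nearest neighbors
    weights = np.exp(-dists)                       # closer neighbors weigh more
    weights /= weights.sum(axis=1, keepdims=True)
    neighbour_avg = (weights[..., None] * prev_feats[idx]).sum(axis=1)
    new_feats = (1 - alpha) * target_feats + alpha * neighbour_avg
    new_feats[:, 6:8] = target_feats[:, 6:8]       # keep original (x, y)
    return new_feats
```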
The general principle, main features, and advantages of this method have been shown and described above. Those skilled in the art should understand that the invention is not restricted to the embodiments above; the embodiments and the description merely illustrate its principle. Various changes and improvements are possible without departing from the spirit and scope of the invention, and all of them fall within the scope of the claimed invention, whose protection is defined by the appended claims and their equivalents.

Claims (1)

1. An unsupervised video segmentation method fusing spatio-temporal multi-feature representations, characterized by comprising: obtaining the video sequence to be segmented; processing the video sequence with superpixel segmentation; matching information between preceding and following frames with optical flow; deriving the range of the moving target from the information of adjacent frames as the initialization input of a graph model; optimizing the matching result with global information; building the graph model and solving a preliminary superpixel-level segmentation result with a graph-cut algorithm; training the Gaussian-mixture-model parameters with the preliminary segmentation result and solving a pixel-level segmentation result; obtaining a saliency-based segmentation result from saliency features; and combining the existing superpixel-level and pixel-level segmentation results by voting to obtain the final segmentation result and output the segmentation of the moving target; the concrete steps being as follows:
1) performing superpixel segmentation on all frames of the video sequence, reducing computational complexity and speeding up processing;
2) computing the feature mean and center position of each superpixel, the feature item of each superpixel being represented by an eight-dimensional vector (R, G, B, H, S, V, x, y);
3) because optical-flow computation is inaccurate, the target location cannot be judged accurately from optical flow alone; therefore combining optical flow with voting to compute the approximate location range of the moving target, while judging the region, foreground or background, to which each superpixel belongs, the result serving as the initialization input of the graph model;
4) using the information provided by optical flow to compute the associations between superpixels of adjacent frames, finding the mutually corresponding superpixel pairs between frame n and frame n+1;
5) after matching is completed for all superpixels of the video sequence, computing a new non-local superpixel feature value for each superpixel of each frame to optimize the original superpixel features; when n <= 5, each superpixel of the frame is optimized from the preceding n-1 frames, and when n > 5, from the preceding five frames;
6) building a graph model composed of unary potential functions and pairwise potential functions, the unary potential comprising a color-feature term and a position-feature term, and the pairwise potential comprising a temporal smoothing term and a spatial smoothing term;
7) computing the cost function of the graph model from the optimized superpixel information of frame n together with the superpixel information of frame n+1, and iterating the graph-cut and max-flow/min-cut computation until convergence to obtain the optimal superpixel-level target segmentation result, i.e., re-judging whether each superpixel belongs to foreground or background;
8) using the superpixel-level segmentation result obtained in step 7) as the prior condition to train each parameter of the Gaussian mixture model, and segmenting the input image again with the trained model to obtain the pixel-level segmentation result;
9) performing saliency analysis on the input image sequence and extracting the parts whose saliency probability exceeds a threshold T, output as the saliency segmentation result;
10) combining the obtained superpixel-level, pixel-level, and saliency segmentation results by voting, and obtaining and outputting the final video object segmentation result.
CN201710810535.9A 2017-09-11 2017-09-11 Unsupervised video segmentation method fusing spatio-temporal multi-feature representation Pending CN107657625A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710810535.9A CN107657625A (en) 2017-09-11 2017-09-11 Unsupervised video segmentation method fusing spatio-temporal multi-feature representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710810535.9A CN107657625A (en) 2017-09-11 2017-09-11 Unsupervised video segmentation method fusing spatio-temporal multi-feature representation

Publications (1)

Publication Number Publication Date
CN107657625A true CN107657625A (en) 2018-02-02

Family

ID=61129216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710810535.9A Pending CN107657625A (en) 2017-09-11 2017-09-11 Unsupervised video segmentation method fusing spatio-temporal multi-feature representation

Country Status (1)

Country Link
CN (1) CN107657625A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123529A (en) * 2013-04-25 2014-10-29 株式会社理光 Human hand detection method and system thereof

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KAIHUA ZHANG et al.: "Unsupervised Video Segmentation via Spatio-Temporally Nonlocal Appearance Learning", Computer Science - Computer Vision and Pattern Recognition *
XIE Huosheng: "Superpixel-based GrabCut foreground extraction algorithm", Journal of Fuzhou University (Natural Science Edition) *
DENG Shuo: "Research on a fast graph-cut algorithm based on pre-segmentation information fusion", Wanfang Data Knowledge Service Platform *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447082B (en) * 2018-08-31 2020-09-15 武汉尺子科技有限公司 Scene moving object segmentation method, system, storage medium and equipment
CN109447082A (en) * 2018-08-31 2019-03-08 武汉尺子科技有限公司 A kind of scene motion Target Segmentation method, system, storage medium and equipment
CN109886345A (en) * 2019-02-27 2019-06-14 清华大学 Self-supervisory learning model training method and device based on relation inference
CN109886345B (en) * 2019-02-27 2020-11-13 清华大学 Self-supervision learning model training method and device based on relational reasoning
CN110047089A (en) * 2019-04-03 2019-07-23 浙江工业大学 One kind being based on the matched pattern matching method of texture block
CN111783497A (en) * 2019-04-03 2020-10-16 北京京东尚科信息技术有限公司 Method, device and computer-readable storage medium for determining characteristics of target in video
CN110111338A (en) * 2019-04-24 2019-08-09 广东技术师范大学 A kind of visual tracking method based on the segmentation of super-pixel time and space significance
CN110245567A (en) * 2019-05-16 2019-09-17 深圳前海达闼云端智能科技有限公司 Barrier-avoiding method, device, storage medium and electronic equipment
CN110390293A (en) * 2019-07-18 2019-10-29 南京信息工程大学 A kind of Video object segmentation algorithm based on high-order energy constraint
CN111161307A (en) * 2019-12-19 2020-05-15 深圳云天励飞技术有限公司 Image segmentation method and device, electronic equipment and storage medium
CN111161307B (en) * 2019-12-19 2023-04-18 深圳云天励飞技术有限公司 Image segmentation method and device, electronic equipment and storage medium
CN113489896A (en) * 2021-06-25 2021-10-08 中国科学院光电技术研究所 Video image stabilization method capable of robustly predicting global motion estimation
CN113570640A (en) * 2021-09-26 2021-10-29 南京智谱科技有限公司 Video image processing method and device
CN116030396A (en) * 2023-02-27 2023-04-28 温州众成科技有限公司 Accurate segmentation method for video structured extraction
CN116030396B (en) * 2023-02-27 2023-07-04 温州众成科技有限公司 Accurate segmentation method for video structured extraction

Similar Documents

Publication Publication Date Title
CN107657625A (en) Unsupervised video segmentation method fusing spatio-temporal multi-feature representation
Fu et al. Onboard real-time aerial tracking with efficient Siamese anchor proposal network
CN110276264B (en) Crowd density estimation method based on foreground segmentation graph
CN105869178B (en) Complex-target dynamic-scene segmentation method based on multi-scale combined-feature convex optimization
CN111428765B (en) Target detection method based on global convolution and local depth convolution fusion
CN110827312B (en) Learning method based on cooperative visual attention neural network
CN112560656A (en) Pedestrian multi-target tracking method combining attention machine system and end-to-end training
CN109447082B (en) Scene moving object segmentation method, system, storage medium and equipment
CN113706581B (en) Target tracking method based on residual channel attention and multi-level classification regression
CN113744311A (en) Twin neural network moving target tracking method based on full-connection attention module
CN107895379A (en) The innovatory algorithm of foreground extraction in a kind of video monitoring
CN113963032A (en) Twin network structure target tracking method fusing target re-identification
CN115205730A (en) Target tracking method combining feature enhancement and template updating
CN109697727A (en) Method for tracking target, system and storage medium based on correlation filtering and metric learning
CN107016675A (en) Unsupervised video segmentation method based on non-local spatio-temporal feature learning
CN112560651B (en) Target tracking method and device based on combination of depth network and target segmentation
Qin et al. Advanced intersection over union loss for visual tracking
Xiong et al. Domain adaptation of object detector using scissor-like networks
Ma et al. Depth-guided progressive network for object detection
Wang et al. Thermal infrared object tracking based on adaptive feature fusion
Tian et al. Lightweight dual-task networks for crowd counting in aerial images
Wang et al. Improved multi-domain convolutional neural networks method for vehicle tracking
Feng et al. Multi-Correlation Siamese Transformer Network with Dense Connection for 3D Single Object Tracking
Ren et al. Robust multiple object mask propagation with efficient object tracking
Yang et al. Vess: Variable event stream structure for event-based instance segmentation benchmark

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20180202)