GB2522259A - A method of object orientation detection - Google Patents

A method of object orientation detection Download PDF

Info

Publication number
GB2522259A
GB2522259A GB1400941.9A GB201400941A GB2522259A GB 2522259 A GB2522259 A GB 2522259A GB 201400941 A GB201400941 A GB 201400941A GB 2522259 A GB2522259 A GB 2522259A
Authority
GB
United Kingdom
Prior art keywords
classifiers
image
orientation
determining
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1400941.9A
Other versions
GB2522259B (en
GB201400941D0 (en
Inventor
Ilya Romanenko
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apical Ltd
Original Assignee
Apical Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apical Ltd filed Critical Apical Ltd
Priority to GB1400941.9A priority Critical patent/GB2522259B/en
Publication of GB201400941D0 publication Critical patent/GB201400941D0/en
Priority to US14/601,095 priority patent/US9483827B2/en
Publication of GB2522259A publication Critical patent/GB2522259A/en
Application granted granted Critical
Publication of GB2522259B publication Critical patent/GB2522259B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/77Determining position or orientation of objects or cameras using statistical methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147Distances to closest patterns, e.g. nearest neighbour classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/446Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering using Haar-like filters, e.g. using integral image techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

The responses of at least two classifiers (see fig. 4) in at least one region of the image (fig. 2) corresponding to the object are determined. The classifiers (e.g. Haar classifier, k-nearest neighbours algorithm, support vector machine) are trained to identify a given object in different specific orientations. The orientation of the object in the image is determined as an average of the specific orientations, weighted by the responses of their respective classifiers. The specific orientations of classifiers having a response smaller than zero may be excluded. The at least one region may be identified using a face detection algorithm. A cluster of regions may be identified for the object, the cluster comprising at least two regions with a response larger than zero for at least one classifier, the regions having different sizes and/or positions within the image, and in which the weighted average is calculated over all regions of the cluster.

Description

A METHOD OF OBJECT ORIENTATION DETECTION
Technical Field
The present invention relates to a method of determining the angle of orientation of an object in an image.
Background
It is frequently desirable to estimate the angle or orientation of an object in an image or video sequence with respect to the camera. For example, the ability of a robotic hand to grasp a three-dimensional object accurately depends on its ability to estimate the relative orientation of that object.
Various methods for determining the orientation angle of an object are known in the art. For example, these methods may extract a sparse representation of an object as a collection of features such as edges and corners, and then analyse the relative orientation of these features to determine an overall orientation angle for the object.
However, these techniques are often not robust to variations in object shape and topography, or to variations in image quality such as noise and non-uniform illumination.
Summary
According to a first aspect of the present invention, there is provided a method of determining an orientation of an object within an image, the method comprising: determining responses of at least two classifiers in a region of the image corresponding to the object, the classifiers having been trained to identify a given object in different specific orientations; determining the orientation of the object as an average of the specific known orientations, weighted by the responses of their respective classifiers.
The method classifies a region of an image according to classifiers. Each classifier is trained to detect an object in a specific orientation, the orientations of different classifiers usually being different. The application of the classifiers to the region produces a response for each orientation. The responses are then used to produce a weighted average of the various orientations. The resultant weighted average is a more accurate determination of the orientation than typically achievable by known methods.
The determined orientation is robust to variations in object shape and topography and has a reduced sensitivity to variations in image quality.
The invention further relates to an apparatus for carrying out the method and a computer program for determining the orientation, which may be implemented in hardware or software in a camera or computer.
Further features and advantages of the invention will become apparent from the following description of preferred embodiments of the invention, given by way of example only, which is made with reference to the accompanying drawings.
Brief Description of the Drawings
Figure 1 shows a method for determining the orientation of an object.
Figure 2 shows an image containing two objects.
Figure 3 shows Haar classifiers for vertical and horizontal lines.
Figure 4 shows face detection using classifiers trained to detect different orientation angles.
Figure 5 shows a cluster of multiple regions corresponding to one object.
Figure 6 shows face orientation determination using a weighted sum of multiple classifiers.
Figure 7 shows an apparatus for implementing an object orientation detection method.
Detailed Description
Object identification and classification is the process by which the presence of an object in an image may be identified, and by which the object may be determined to belong to a given class of object. An example of such an object is a human face in an image of a group of people, which may be determined to belong to the class of human faces. The object may be grouped in, for example, three classes: a face oriented to the left, to the right, and to the front of the image. An object in an image may be identified and classified using one of many methods well known to those skilled in the art. Such methods include face detection algorithms, histograms of oriented gradients, and background segmentation.
Figure 1 shows schematically a method according to one embodiment, in which the orientation of an object in an image may be determined. A region in the image may be determined to correspond to the object 101. The response of two or more classifiers in this region may then be determined 102, the classifiers having been previously trained on images of objects in different orientations. The orientation of the object may then be determined 103 as a weighted average of the orientations for which the classifiers have been trained, the orientations being weighted by the corresponding classifier responses. The orientation may be expressed for example as an angle with respect to a predetermined direction, such as the viewing direction, or as a vector.
Figure 2 shows an image 201 containing objects 202, 203, which may be human faces. An identification and classification method may analyse multiple regions 204, 205, 206 within the image with a previously trained classifier, to determine how closely they correspond to objects on which the classifier has been trained. The regions may be obtained by systematically scanning over the image, producing multiple regions of varying size and position. A region where the classifier has a response larger than zero may be a region in which an object on which the classifier has been trained is present.
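The systematic scan over the image described above can be sketched as follows. The minimum window size, scale step and stride are illustrative assumptions for this sketch; the patent does not specify particular values.

```python
import numpy as np

def sliding_windows(image_shape, min_size=24, scale_step=1.25, stride_frac=0.25):
    """Yield (x, y, size) square regions scanned over an image of the
    given (height, width), at multiple scales and positions."""
    h, w = image_shape
    size = min_size
    while size <= min(h, w):
        stride = max(1, int(size * stride_frac))
        for y in range(0, h - size + 1, stride):
            for x in range(0, w - size + 1, stride):
                yield x, y, size
        size = int(size * scale_step)

# Scan a 96 x 128 image, producing regions of varying size and position.
regions = list(sliding_windows((96, 128)))
```

Each region produced by such a scan would then be evaluated by the trained classifiers.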
As an example, a first, second and third classifier may have been trained on images containing faces oriented towards the left, the right and the front of the image, respectively. If a region gives a positive response for at least one of these classifiers, the region will include a face. Hence, region 206 in figure 2, which does not contain a face, will not give a positive response for any of the classifiers. The regions 204 and 205 include a face and will give a response larger than zero for at least one of the classifiers. If the object 202 is a face oriented to the left and the object 203 a face oriented to the right, the first classifier will determine a higher degree of correspondence for the region 204, which corresponds to a face oriented to the left, than for the region 205 and the region 206, which do not contain a face oriented to the left. In this manner, the identification and classification method may identify an object 202 as being located within a region 204, and classify it as a face oriented to the left.
According to one embodiment, a human face in an image may be identified using a facial detection routine employing a Haar classification scheme. This method involves the analysis of a region of an image with a previously trained classifier, to determine a response. For example, the entire image or an identified region may be divided into multiple zones, for example a grid of zones, with the response of feature detectors being determined in each zone. The feature detectors may, for example, be Haar edge detectors corresponding to edges at various angles. Figure 3 shows two Haar classifiers, one for detection of vertical 301 and one for detection of horizontal 302 edges. These are compared to a zone within a region; for example, the vertical edge Haar classifier 301 will be more similar to a zone containing a vertical edge between a light region and a dark region than to a zone not containing such an edge.
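The comparison of a Haar edge classifier to a zone can be sketched as a simple correlation between the zone's luminance and a signed kernel. The 4x4 kernel size and the sample zones below are illustrative choices, not taken from the patent.

```python
import numpy as np

# Vertical-edge Haar kernel: a light half next to a dark half, as in
# classifier 301 of figure 3; its transpose matches the horizontal case 302.
H_VERTICAL = np.hstack([np.ones((4, 2)), -np.ones((4, 2))])
H_HORIZONTAL = H_VERTICAL.T

def haar_response(zone, kernel):
    """Response of one Haar feature in one zone: the elementwise
    product of the zone's luminance with the kernel, summed."""
    return float(np.sum(zone * kernel))

# A zone containing a vertical light-to-dark edge responds strongly,
# while a uniform zone responds near zero.
edge_zone = np.hstack([np.full((4, 2), 1.0), np.zeros((4, 2))])
flat_zone = np.full((4, 4), 0.5)
```

Here `haar_response(edge_zone, H_VERTICAL)` exceeds the response of the uniform zone, mirroring the similarity comparison described above.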
Another type of classifier that can be used in the method is a support vector machine. A support vector machine is a known kind of classifier or algorithm used to compare part of an image with a trained template and measure the overlap to decide whether the object is there or not, as described for example in the article 'Support-Vector Networks' by Cortes and Vapnik, Machine Learning, 20, 273-297 (1995), Kluwer Academic Publishers.
The response of each feature detector may then be compared to an expected weight for that feature detector in that zone, the expected weights being obtained by training the classification routine on images containing known objects in specific known orientations. An overall response can then be calculated, which indicates the degree of correspondence between the responses of the feature detectors and their expected weights in each zone. Thus the classifier may comprise multiple feature detectors.
A known method of face detection uses a Haar classification scheme, which forms the basis for methods such as that due to Viola and Jones. Such a method may typically involve the detection of a face in one or more poses, in this case a face directed to the left, to the right, and to the front. The method compares a region of an image with a previously trained classifier to obtain a response, which is in turn used to decide whether a face is present in one of the target poses.
The response to a given pose is denoted as S_pose. It is obtained by defining a rectangular detection window consisting of M x N zones. Each zone may cover one or more pixels of the image. In each zone, response values R are calculated based on a set of feature detectors. These may be, for example, Haar edge detectors corresponding to edges at 0 (horizontal), 45, 90 and 135 degrees, denoted H_0, H_45, H_90 and H_135 respectively, such that for example the response at zone (m, n) for i degrees is

R_i(m, n) = Conv(H_i, Image(m, n))

where Conv is a convolution, H_i is the Haar filter kernel, and Image is the image luminance data for the zone (m, n).
The training of the object detector produces a map of expected weights for the feature detectors in each zone for a chosen pose: {M_pose,0(m, n), M_pose,45(m, n), M_pose,90(m, n), M_pose,135(m, n)}. Typically three poses are trained for: Front, Left and Right.
In each zone (m, n) a score P is assigned for each feature, representing the likelihood that the feature is present, with increasingly large positive values indicating an increasing likelihood that the feature is present, and increasingly negative values indicating an increasing likelihood that the feature is not present:

P_pose,i(m, n) = R_i(m, n) * M_pose,i(m, n)

Finally, the response of a trained object detector within the given detection window is

S_pose = sum over all zones (m, n) and features i of P_pose,i(m, n)

The detection window may cover the entire image or a selected part of the image.
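The per-window scoring just described can be sketched as follows. The 4x4 zone grid and the random arrays standing in for real responses and trained weight maps are illustrative assumptions only.

```python
import numpy as np

ANGLES = (0, 45, 90, 135)  # the four Haar edge orientations

def window_score(responses, weights):
    """S_pose for one detection window: multiply each per-zone feature
    response R_i(m, n) by its trained weight M_pose,i(m, n), then sum
    over all features i and all zones (m, n)."""
    return float(sum(np.sum(responses[a] * weights[a]) for a in ANGLES))

# Random stand-ins for real feature responses and trained weights.
rng = np.random.default_rng(0)
responses = {a: rng.standard_normal((4, 4)) for a in ANGLES}
weights = {a: rng.standard_normal((4, 4)) for a in ANGLES}
score = window_score(responses, weights)
```

A window whose responses line up with the trained weight map yields a large positive S_pose; misaligned windows tend towards zero or negative scores.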
In a typical method a cluster of detections is used to detect an individual object in the image, each detection corresponding to a different offset of the window position with respect to the object and/or a different size scale as further described below.
The classification scheme may alternatively use, for example, the well-known techniques of an AdaBoost algorithm, a support vector machine, or a k-nearest neighbours algorithm.
The response quantifies the degree of correspondence between the region of the image and the trained object. For example, the method may employ classifiers for faces directed to the front, to the left and to the right of the image. Figure 4 shows three orientations of a head, namely right-facing 401, front-facing 402, and left-facing 403.
In each case a classifier corresponding to the relevant orientation has been used to identify a region 404, 405, 406 of highest response corresponding to a face in that orientation.
According to some embodiments, several regions may correspond to a single object in the image, with the various regions being offset from each other in position and size within the image. This is shown in figure 5, which depicts an image 501 containing an elliptical object 502, which may for example be a human face. The region 503 may have the highest classifier response, but typically a high response, i.e. a response larger than zero, will also be observed for multiple other regions 504 which are close in size and position. These multiple regions with high response may be taken together as a 'cluster'. The weighted average is calculated over all regions of the cluster, which may improve the accuracy of the determined orientation.
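One way such nearby detections could be gathered into clusters is by greedily grouping regions whose overlap exceeds a threshold. The patent does not specify a grouping criterion; the intersection-over-union measure and the 0.3 threshold below are assumptions for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, size) square regions."""
    ax, ay, asz = a
    bx, by, bsz = b
    ix = max(0, min(ax + asz, bx + bsz) - max(ax, bx))
    iy = max(0, min(ay + asz, by + bsz) - max(ay, by))
    inter = ix * iy
    union = asz * asz + bsz * bsz - inter
    return inter / union

def group_clusters(regions, threshold=0.3):
    """Greedily assign each region to the first cluster containing a
    sufficiently overlapping member, else start a new cluster."""
    clusters = []
    for r in regions:
        for c in clusters:
            if any(iou(r, m) >= threshold for m in c):
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

# Two overlapping detections of one object plus a distant detection.
clusters = group_clusters([(0, 0, 20), (2, 2, 20), (50, 50, 20)])
```

Here the first two regions, close in size and position, form one cluster, while the distant region forms its own.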
In the prior art, such a response determines the presence or absence of a particular object: a response > 0 is typically interpreted as a detected object, while a response < 0 is typically interpreted as the absence of the object. For example, a response < 0 for all classifiers would imply that no face is present, whereas a response > 0 for the left-facing classifier and a response < 0 for the right-facing and front-facing classifiers would imply that a face is present and oriented towards the left of the image.
However, this may not be sufficiently accurate: for example, manipulation of a robot hand to grasp an object may require an accuracy of the orientation within a few degrees.
The accuracy may be improved relative to that achieved by known methods by constructing a weighted average of the orientation angles of each classifier, the orientation angles being weighted by the response of that classifier. A specific orientation for which a classifier has been trained may be referred to as a 'pose', for example left-facing and right-facing. For a number of poses, each pose having orientation angle theta_pose weighted by a corresponding classifier response S_pose, this weighted average A may be expressed mathematically as

A = (sum over poses of theta_pose * S_pose) / (sum over poses of S_pose)

where the sums exclude poses where S_pose < 0; i.e. poses not present are excluded from the weighted averaging operation. In other words, the weighted average excludes specific orientations of classifiers having a response smaller than zero. The parameter A is the orientation of the object as determined by the method, expressed as an angle.
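The weighted average above can be sketched directly. The pose angles of -90, 0 and 90 degrees for left-, front- and right-facing are an illustrative convention; the patent does not fix particular angle values for the poses.

```python
def weighted_orientation(pose_angles, pose_scores):
    """A = sum(theta_pose * S_pose) / sum(S_pose), excluding poses
    whose classifier response is below zero."""
    pairs = [(a, s) for a, s in zip(pose_angles, pose_scores) if s >= 0]
    total = sum(s for _, s in pairs)
    if total == 0:
        return None  # no pose detected in this region
    return sum(a * s for a, s in pairs) / total

# Front (0 deg) and right (90 deg) respond; left (-90 deg) is excluded.
angle = weighted_orientation([-90, 0, 90], [-0.2, 1.0, 3.0])
```

With these scores the result is 67.5 degrees, between the front and right poses but closer to the stronger right-facing response.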
In the embodiment in which a cluster of multiple regions is identified for each image, all of the regions within each cluster may be included within the weighted average. If the response of the j-th member of a cluster of multiple detections corresponding to a given pose is termed S_pose,j, the weighted average over all poses may be expressed mathematically as

A = (sum over poses and j of theta_pose * S_pose,j) / (sum over poses and j of S_pose,j)

where the sums exclude the poses where S_pose,j < 0.
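The cluster-extended average is the same computation taken over every (pose, member) detection pair. The angle convention and scores below are illustrative assumptions.

```python
def cluster_orientation(detections):
    """Weighted average over all cluster members: detections is a list
    of (theta_pose, S_pose_j) pairs across poses and members j, with
    negative responses excluded as in the single-region case."""
    kept = [(theta, s) for theta, s in detections if s >= 0]
    total = sum(s for _, s in kept)
    return sum(theta * s for theta, s in kept) / total if total else None

# Four overlapping detections of the same face across three poses;
# the weak negative left-facing detection is excluded.
detections = [(0, 0.8), (90, 2.4), (90, 1.6), (-90, -0.1)]
angle = cluster_orientation(detections)
```

Pooling the cluster members in this way lets many slightly offset detections vote on the final angle, which is the accuracy improvement the text describes.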
A weighted average constructed in this manner may typically be accurate as an estimate of the orientation angle to within a few degrees. Figure 6 shows an image of a head 601 oriented partially to the front and partially to the right of the image. Classifiers corresponding to different orientations have been used to identify multiple regions 602 corresponding to high responses for different orientation angles. The orientation of the head 601 lying within the region of strongest response 603 may thus be determined as a weighted average. In this case, the orientation angle is determined to be 68 degrees.
An apparatus for carrying out the above described method is shown in figure 7. An image is input 701 to a processor 702 and a memory 703 which includes computer program instructions. The instructions are configured to cause the processor to determine the orientation of an object in the image in the manner described above. The orientation is then output 704. The apparatus may for example be implemented in a camera or computer, and the image may be input from a camera sensor or from a memory. The output may for example be to a screen, or stored in memory.
The invention may be implemented in a computer program product comprising a non-transitory computer readable storage medium having computer readable instructions stored thereon, the computer readable instructions being executable by a computerized device to cause the computerized device to determine the orientation of an object in an image in the manner described above.
The above embodiments are to be understood as illustrative examples of the invention. Further embodiments of the invention are envisaged. For example, the method is not only applicable to facial detection algorithms, but may be used to determine the orientation of any object which classifiers can be trained to detect in two or more orientations. The method can be used for determining the orientation of a single object in an image but also for the orientation of multiple objects in an image. The images may be still images or frames of a video. In the latter case the method can be used to provide a time evolution of the orientation of an object in the video. The invention may also be implemented in hardware or software, for example in a camera or computer. It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.

Claims (10)

  1. A method of determining an orientation of an object within an image, the method comprising: determining responses of at least two classifiers in at least one region of the image corresponding to the object, the classifiers having been trained to identify a given object in different specific orientations; determining the orientation of the object as an average of the specific orientations, weighted by the responses of their respective classifiers.
  2. A method according to claim 1, wherein the determining excludes specific orientations of classifiers having a response smaller than zero.
  3. A method according to claim 1 or 2, comprising identifying the at least one region as a region with a response larger than zero of at least one classifier.
  4. A method according to claim 1, 2 or 3, comprising identifying the at least one region using a face detection algorithm.
  5. A method according to any one of the preceding claims, comprising identifying a cluster of regions for the object, the cluster comprising at least two regions with a response larger than zero for at least one classifier, the regions having different sizes and/or different positions within the image; and in which the weighted average is calculated over all regions of the cluster.
  6. A method according to any one of the preceding claims, in which at least one of the classifiers is a Haar classifier.
  7. A method according to any one of the preceding claims, in which at least one of the classifiers is a k-nearest neighbours algorithm.
  8. A method according to any one of the preceding claims, in which at least one of the classifiers is a support vector machine.
  9. Apparatus for processing an image, the apparatus comprising: at least one processor; and at least one memory including computer program instructions, the at least one memory and the computer program instructions being configured to, with the at least one processor, cause the apparatus to perform: a method of determining an orientation of an object within an image, the method comprising: determining responses of at least two classifiers in at least one region of the image corresponding to the object, the classifiers having been trained to identify a given object in different specific orientations; determining the orientation of the object as an average of the specific orientations, weighted by the responses of their respective classifiers.
  10. A computer program product comprising a non-transitory computer readable storage medium having computer readable instructions stored thereon, the computer readable instructions being executable by a computerized device to cause the computerized device to perform a method of determining an orientation of an object within an image, the method comprising: determining responses of at least two classifiers in at least one region of the image corresponding to the object, the classifiers having been trained to identify a given object in different specific orientations; determining the orientation of the object as an average of the specific orientations, weighted by the responses of their respective classifiers.
GB1400941.9A 2014-01-20 2014-01-20 A method of object orientation detection Expired - Fee Related GB2522259B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1400941.9A GB2522259B (en) 2014-01-20 2014-01-20 A method of object orientation detection
US14/601,095 US9483827B2 (en) 2014-01-20 2015-01-20 Method of object orientation detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1400941.9A GB2522259B (en) 2014-01-20 2014-01-20 A method of object orientation detection

Publications (3)

Publication Number Publication Date
GB201400941D0 GB201400941D0 (en) 2014-03-05
GB2522259A true GB2522259A (en) 2015-07-22
GB2522259B GB2522259B (en) 2020-04-29

Family

ID=50239199

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1400941.9A Expired - Fee Related GB2522259B (en) 2014-01-20 2014-01-20 A method of object orientation detection

Country Status (2)

Country Link
US (1) US9483827B2 (en)
GB (1) GB2522259B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9892301B1 (en) * 2015-03-05 2018-02-13 Digimarc Corporation Localization of machine-readable indicia in digital capture systems
CN107865473B (en) * 2016-09-26 2019-10-25 华硕电脑股份有限公司 Characteristics of human body's range unit and its distance measuring method
CN112825145B (en) * 2019-11-20 2022-08-23 上海商汤智能科技有限公司 Human body orientation detection method and device, electronic equipment and computer storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09145323A (en) * 1995-11-20 1997-06-06 Nec Robotics Eng Ltd Recognition apparatus for position and direction
US20090297038A1 (en) * 2006-06-07 2009-12-03 Nec Corporation Image Direction Judging Device, Image Direction Judging Method and Image Direction Judging Program

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6915025B2 (en) * 2001-11-27 2005-07-05 Microsoft Corporation Automatic image orientation detection based on classification of low-level image features
US7194114B2 (en) * 2002-10-07 2007-03-20 Carnegie Mellon University Object finder for two-dimensional images, and system for determining a set of sub-classifiers composing an object finder
KR100643303B1 (en) * 2004-12-07 2006-11-10 삼성전자주식회사 Method and apparatus for detecting multi-view face
US8515126B1 (en) * 2007-05-03 2013-08-20 Hrl Laboratories, Llc Multi-stage method for object detection using cognitive swarms and system for automated response to detected objects
US7848548B1 (en) * 2007-06-11 2010-12-07 Videomining Corporation Method and system for robust demographic classification using pose independent model from sequence of face images
US7684954B2 (en) * 2007-12-31 2010-03-23 Intel Corporation Apparatus and method for classification of physical orientation
JP5848551B2 (en) * 2011-08-26 2016-01-27 キヤノン株式会社 Learning device, learning device control method, detection device, detection device control method, and program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09145323A (en) * 1995-11-20 1997-06-06 Nec Robotics Eng Ltd Recognition apparatus for position and direction
US20090297038A1 (en) * 2006-06-07 2009-12-03 Nec Corporation Image Direction Judging Device, Image Direction Judging Method and Image Direction Judging Program

Also Published As

Publication number Publication date
US20150206311A1 (en) 2015-07-23
GB2522259B (en) 2020-04-29
GB201400941D0 (en) 2014-03-05
US9483827B2 (en) 2016-11-01

Similar Documents

Publication Publication Date Title
CN110543837B (en) Visible light airport airplane detection method based on potential target point
JP6032921B2 (en) Object detection apparatus and method, and program
US9008365B2 (en) Systems and methods for pedestrian detection in images
US9367758B2 (en) Feature extraction device, feature extraction method, and feature extraction program
US9294665B2 (en) Feature extraction apparatus, feature extraction program, and image processing apparatus
JP6345147B2 (en) Method for detecting an object in a pair of stereo images
Sujatha et al. Performance analysis of different edge detection techniques for image segmentation
Choi et al. Fast human detection for indoor mobile robots using depth images
CN104077594B (en) A kind of image-recognizing method and device
JP6351243B2 (en) Image processing apparatus and image processing method
JP5671928B2 (en) Learning device, learning method, identification device, identification method, and program
US9501823B2 (en) Methods and systems for characterizing angle closure glaucoma for risk assessment or screening
JP2016015045A (en) Image recognition device, image recognition method, and program
Shajahan et al. Identification and counting of soybean aphids from digital images using shape classification
JP2013206458A (en) Object classification based on external appearance and context in image
JP2015032001A (en) Information processor and information processing method and program
JP2021069793A5 (en)
US9483827B2 (en) Method of object orientation detection
JP2015148895A (en) object number distribution estimation method
KR101542206B1 (en) Method and system for tracking with extraction object using coarse to fine techniques
JP6647134B2 (en) Subject tracking device and program thereof
KR101696086B1 (en) Method and apparatus for extracting object region from sonar image
Mattheij et al. Depth-based detection using Haarlike features
Mittal et al. Face detection and tracking: a comparative study of two algorithms
Tatarenkov et al. Feature extraction from a depth map for human detection

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20220929 AND 20221005

PCNP Patent ceased through non-payment of renewal fee

Effective date: 20230120