CN111382651A - Data marking method, computer device and computer readable storage medium

Data marking method, computer device and computer readable storage medium

Info

Publication number
CN111382651A
CN111382651A
Authority
CN
China
Prior art keywords: head, preset, face, pictures, marking method
Prior art date
Legal status
Pending
Application number
CN201811653221.3A
Other languages
Chinese (zh)
Inventor
刘若鹏
栾琳
肖森林
陈九思
张洁
赵金玉
Current Assignee
Kuang Chi Institute of Advanced Technology
Original Assignee
Hangzhou Guangqi Artificial Intelligence Research Institute
Priority date: 2018-12-29
Filing date: 2018-12-29
Publication date: 2020-07-07
Application filed by Hangzhou Guangqi Artificial Intelligence Research Institute
Priority to CN201811653221.3A
Publication of CN111382651A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification

Abstract

Embodiments of the invention disclose a data marking method, a computer device and a computer-readable storage medium. The data marking method comprises the following steps: acquiring a video containing multiple people; performing image-cutting processing on the video to obtain a preset number of head or face pictures of each person; screening, from the preset number of head or face pictures of each person, the pictures that show at least one of a preset wearing object, a preset lighting characteristic and a preset weather characteristic; and labeling one or more of the screened head or face pictures. The data marking method of the embodiments can improve the accuracy and robustness of a face recognition algorithm and reduce its miss rate.

Description

Data marking method, computer device and computer readable storage medium
Technical Field
The present invention relates to the field of data processing, and in particular, to a data marking method, a computer device, and a computer-readable storage medium.
Background
In recent years, big data and neural network technologies have developed rapidly, and deep learning based on big data and neural networks has quickly spread into many fields. Because neural networks have a strong ability to extract features from image data, applying deep learning to face recognition has quickly become a hot research direction, and the capability of face recognition can be continuously optimized and improved as more data is learned. Clearly, data is an essential foundation for deep-learning-based face recognition.
However, in the prior art, face recognition algorithms suffer from low accuracy, poor robustness and a high miss rate.
Disclosure of Invention
To solve the above technical problem, one aspect of the embodiments of the present invention provides a data marking method, which comprises:
acquiring a video containing multiple people;
performing image-cutting processing on the video containing the multiple people to obtain a preset number of head or face pictures of each person; and
screening, from the preset number of head or face pictures of each person, the pictures that show at least one of a preset wearing object, a preset lighting characteristic and a preset weather characteristic, and labeling one or more of the screened head or face pictures.
Further, in the data marking method, the preset wearing object includes glasses, sunglasses or a hat; the preset lighting characteristic includes a strong-light characteristic or a weak-light characteristic; and the preset weather characteristic includes rainy weather or snowy weather.
Further, the data marking method further comprises at least one of the following steps:
deleting, from the preset number of head or face pictures of each person, any head or face picture in which a preset wearing object occludes 30% or more of the head or face;
deleting any head or face picture that is duplicated among all head or face pictures of the multiple people; and
deleting any head or face picture, among all head or face pictures of the multiple people, that shows an opened umbrella.
Further, in the data marking method, the step of screening out the head or face pictures that show at least one of a preset wearing object, a preset lighting characteristic and a preset weather characteristic includes:
screening out, from the preset number of head or face pictures of each person, the pictures that show glasses, sunglasses, a hat, strong light, weak light or rainy weather.
Further, the data marking method further comprises:
saving the preset number of head or face pictures of each person into an individual folder.
Further, the data marking method further comprises:
performing deduplication processing on all head or face pictures of the multiple people across the individual folders, and deleting any individual folder that contains duplicate pictures.
Further, in the data marking method, the preset number of face pictures of each person includes at least a front-face picture and side-face pictures, the side-face pictures lying within a range of -30 degrees to +30 degrees relative to the front-face picture.
Further, in the data marking method, the preset number is greater than or equal to 6 and less than 20; or the preset number is equal to 6.
Another aspect of the embodiments of the present invention provides a computer device, which includes a processor, and the processor is configured to implement the steps of the data marking method described above when executing a computer program stored in a memory.
Yet another aspect of the embodiments of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the data marking method described above.
The embodiments of the invention provide the data marking method, the computer device and the computer-readable storage medium described above; the data marking method can improve the accuracy and robustness of a face recognition algorithm and reduce its miss rate.
Drawings
Fig. 1 is a flow chart of a data marking method 100 according to an embodiment of the present invention.
Fig. 2 is a flowchart of a picture deletion method 200 according to an embodiment of the present invention.
Detailed Description
To build a face recognition algorithm training database more reasonably, this patent summarizes a method based on long-term experience in data mining and data labeling. Its advantages are that data subsets for complex scenes are built from the data set and that the data set is refined to suit the particular rules of machine learning, so that the accuracy and robustness of the face recognition algorithm can be improved and its miss rate reduced. To explain the technical content, structural features, objects and effects of the technical solutions in detail, the following detailed description is given with reference to the accompanying drawings and the embodiments.
Fig. 1 is a flow chart of a data marking method 100 according to an embodiment of the present invention. The data marking method 100 includes steps 102, 104, and 106.
Step 102: a video containing a plurality of people is obtained.
Step 104: performing image-cutting processing on the video containing the multiple people to obtain a preset number of head or face pictures of each person. In this context, the face is understood as the human face.
Step 106: screening, from the preset number of head or face pictures of each person, the pictures that show at least one of a preset wearing object, a preset lighting characteristic and a preset weather characteristic, and labeling one or more of the screened head or face pictures.
In one non-limiting example, the preset number of head or face pictures of each person are in a JPEG file format.
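As a rough, non-authoritative sketch of steps 102 and 104 only (the patent itself prescribes no code), the following Python snippet reads a video, detects faces frame by frame with OpenCV's bundled Haar cascade, and saves the cropped regions as JPEG files. The detector choice, frame step and file naming are illustrative assumptions; grouping the crops into per-person folders is assumed to happen in a later step.

```python
import os
import cv2  # OpenCV, used here as an assumed tooling choice

def cut_faces_from_video(video_path: str, out_dir: str, frame_step: int = 25) -> int:
    """Cut head/face pictures out of a video and save them as JPEG files."""
    os.makedirs(out_dir, exist_ok=True)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    saved = frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % frame_step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Detections smaller than 60x60 are rejected up front, matching the
            # later size requirement in the data processing stage.
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5, minSize=(60, 60)):
                crop = frame[y:y + h, x:x + w]
                cv2.imwrite(os.path.join(out_dir, f"face_{frame_idx}_{saved}.jpg"), crop)
                saved += 1
        frame_idx += 1
    cap.release()
    return saved
```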
In one embodiment of the invention, the preset number is greater than or equal to 6 and less than 20. In a particular embodiment of the invention, said preset number is equal to 6.
In an embodiment of the present invention, the preset wearing object includes glasses, a hat, a scarf or jewelry; the preset lighting characteristic includes a strong-light or weak-light characteristic; and the preset weather characteristic includes rainy weather, snowy weather, sunny weather, cloudy weather, or the like. The jewelry may include earrings, necklaces and the like.
In an embodiment of the present invention, the screening in step 106 includes: screening out, from the preset number of head or face pictures of each person, the pictures that show glasses, sunglasses, a hat, strong light, weak light or rainy weather.
Specifically, in the embodiment of the present invention, the preset number of face pictures of each person includes at least a front-face picture and side-face pictures, the side-face pictures lying within a range of -30 degrees to +30 degrees relative to the front-face picture.
Further, the data marking method 100 further includes: saving the preset number of head or face pictures of each person into an individual folder. In this manner, since multiple people appear in the video, multiple individual folders are ultimately obtained, one per person, each holding the preset number of head or face pictures of the corresponding person.
Further, the data marking method 100 further includes: performing deduplication processing on all head or face pictures of the multiple people across the individual folders, and deleting any individual folder that contains duplicate pictures.
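A minimal sketch of how such cross-folder deduplication could be scripted is given below; it is an assumption about the implementation, not the patent's own script. Exact byte hashing only catches identical files, and the choice to delete the later-seen folder is likewise an assumed convention.

```python
import hashlib
import shutil
from pathlib import Path

def drop_folders_with_duplicates(root: str) -> list[str]:
    """Delete every individual folder that holds a picture already seen in another folder."""
    seen: dict[str, str] = {}       # picture hash -> folder that first contained it
    to_delete: set[str] = set()
    for folder in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        for pic in folder.glob("*.jpg"):
            digest = hashlib.md5(pic.read_bytes()).hexdigest()
            owner = seen.setdefault(digest, str(folder))
            if owner != str(folder):
                to_delete.add(str(folder))   # duplicate of a picture in another folder
    for folder in to_delete:
        shutil.rmtree(folder)
    return sorted(to_delete)
```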
Based on the data marking method 100 shown in fig. 1, fig. 2 is a flowchart of a picture deletion method 200 according to an embodiment of the present invention. Referring to fig. 2, the picture deletion method 200 includes at least one of step 202, step 204 and step 206.
Step 202: deleting, for the multiple people, any head or face picture, among the preset number of head or face pictures of each person, in which a preset wearing object occludes 30% or more of the head or face.
Step 204: deleting any head or face picture that is duplicated among all head or face pictures of the multiple people.
Step 206: deleting any head or face picture, among all head or face pictures of the multiple people, that shows an opened umbrella.
Further, in an embodiment of the present invention, the picture deletion method 200 further includes deleting, among the preset number of head or face pictures of each person, any picture in which strong or weak light makes the head or face invisible, and/or any picture in which raindrops make the head or face invisible.
By performing at least one of steps 202, 204 and 206 of the picture deletion method 200, the quality of the picture data, and thus of the data marking work, can be guaranteed.
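Purely as an illustration of how the three deletion rules could be expressed in code (the attribute fields below are hypothetical annotations assumed to come from earlier labeling, not outputs of any detector described in the patent):

```python
from dataclasses import dataclass

@dataclass
class PictureInfo:
    path: str
    content_hash: str        # used to spot repeated pictures (step 204)
    occlusion_ratio: float   # fraction of the head/face covered by a wearing object
    has_umbrella: bool       # opened-umbrella feature (step 206)

def pictures_to_delete(pictures: list[PictureInfo]) -> list[str]:
    """Return the paths that the deletion rules of method 200 would remove."""
    doomed, seen_hashes = [], set()
    for pic in pictures:
        if pic.occlusion_ratio >= 0.30:        # step 202: wearing object covers >= 30%
            doomed.append(pic.path)
        elif pic.has_umbrella:                 # step 206: umbrella pictures are dropped
            doomed.append(pic.path)
        elif pic.content_hash in seen_hashes:  # step 204: keep only the first copy
            doomed.append(pic.path)
        else:
            seen_hashes.add(pic.content_hash)
    return doomed
```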
The embodiment of the invention provides a method for building a face recognition algorithm training database, which is used to produce training data for face recognition algorithms and deep learning. Compared with other common methods, it brings clear improvements. First, it borrows advanced quality-management practice in its process design, eliminating hidden data-quality risks during production. Second, it classifies the data effectively so as to cover a large number of application scenarios, improving the adaptability of the algorithm/deep learning. Third, it builds data subsets in a targeted way according to the cognitive rules of machine learning, compensating for certain weak points of the algorithm/deep learning and improving the accuracy and robustness of face recognition.
The following section describes the data marking method 100 of the present invention in a specific application scenario or example.
In the data marking method 100, a method for building a face recognition algorithm training database is involved, which includes the following steps:
Step 1: data acquisition.
Step 2: data processing, which involves screening and annotating the images/pictures.
Step 3: data inspection, which covers basic quality inspection on one hand and deduplication processing on the other.
Step 4: finished-product classification, in which the finished products are stored by category and suitable data tags are added.
The specific stages are as follows:
In the data acquisition stage, the acquired video is cut into a series of images containing human faces using a script; at the same time, considering task complexity and other special purposes, a manual image-cutting step is retained so that various requirements can be handled flexibly.
In the data processing stage, a dedicated step ensures the validity of the pictures: the pictures produced in the first stage are screened preliminarily. Annotation of the picture data then begins; this step involves more rules.
In the data inspection stage, a set of quality inspection rules is designed to guarantee the quality of the labeled data. In addition, a deduplication step is included: the labeled data are analyzed and deduplicated with a script, and staff are assigned to supervise and check the deduplication.
In the finished-product classification stage, the inspected data are organized into different data classes according to rules, so that the database can be managed and queried conveniently.
In one specific example, the data marking method 100 shown in fig. 1 includes:
1. Data acquisition: the data set directly affects the performance of any algorithm trained on it, and in the field of face recognition the characteristics of the data set are particularly important. Typically, the characteristics of Eastern faces differ from those of Western faces; how could an algorithm trained on a data set built from Western faces achieve good results in the practical task of recognizing Eastern faces? For face recognition under a given camera, videos from that camera under the corresponding conditions are obtained, data sets are produced from those videos, and the algorithm is trained on them, which noticeably improves its performance. Based on this idea, selecting video sources purposefully is the critical first step.
The video is then converted into pictures in one of two ways. The first is computer image cutting: a script is written that detects faces in the video and automatically cuts out pictures. The second is manual image cutting: the video is processed manually with video-processing software. The specific rules for script cutting are as follows. First, the video is divided into segments of about 10 minutes each; each 10-minute segment is automatically cut by the script and its pictures are placed into the same folder. Second, a filtering rule is applied: a folder with either of the following characteristics is deleted, the first being that it contains more than 50 pictures and the second being that it contains fewer than 6 pictures. Finally, one person's pictures may end up in different folders, and pictures cut from different people are placed into different folders.
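A minimal sketch of the folder filtering rule just described is shown below (an assumption about how the script could enforce it; the JPEG extension and flat folder layout are illustrative):

```python
import shutil
from pathlib import Path

def filter_segment_folders(root: str, min_pics: int = 6, max_pics: int = 50) -> None:
    """Delete any segment folder holding more than 50 or fewer than 6 pictures."""
    for folder in Path(root).iterdir():
        if not folder.is_dir():
            continue
        count = sum(1 for _ in folder.glob("*.jpg"))
        if count > max_pics or count < min_pics:
            shutil.rmtree(folder)   # folder fails the filtering rule, delete it
```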
Compared with script cutting, manual cutting offers greater flexibility and selectivity: the video-to-picture conversion can be adapted to different needs, and it conveniently compensates for procedural blind spots of script cutting. For example, script cutting relies on face detection, but face detection under special conditions is not 100% reliable, so faces under some conditions may not be detected at the cutting stage and may never make it into the data set, leaving the algorithm unable to be effectively trained to recognize faces under such conditions.
2. Data processing: not all pictures produced by image cutting meet the requirements, so preliminary screening must be carried out in the data processing stage. The corresponding rules or requirements are: (1) at least 6 and fewer than 20 face pictures must be collected for each person; (2) the same person must not appear under two folders; (3) the collected faces must include a front face and side faces (the angle of at least two of the 6 pictures must differ noticeably), where a side face is within +/-30 degrees of the front face, the angle being the left-right and up-down deflection about the center line of the face; (4) the face must be upright, the angle of a face directly facing the camera must not change noticeably, the five facial features must be clearly visible without mosaic, and the face in the picture must be above the shoulders; (5) size requirement: larger than 60 x 60; (6) faces of people who frequently move under the corresponding camera are not collected, to avoid duplicate data.
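Two of these screening rules, the per-person picture count and the minimum picture size, are easy to check automatically; the sketch below does so using Pillow as an assumed tool. The pose, clarity and duplicate-person rules are left to manual review here, since automating them would need extra models that the text does not specify.

```python
from pathlib import Path
from PIL import Image  # Pillow, assumed for reading image sizes

def folder_passes_screening(folder: str, min_count: int = 6, max_count: int = 20,
                            min_side: int = 60) -> bool:
    """Check rule (1), 6 <= count < 20 per person, and rule (5), size above 60x60."""
    pics = sorted(Path(folder).glob("*.jpg"))
    if not (min_count <= len(pics) < max_count):
        return False
    for pic in pics:
        with Image.open(pic) as img:
            w, h = img.size
        if w <= min_side or h <= min_side:
            return False
    return True
```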
The screened pictures then enter the marking/data labeling step: during labeling, reasonable pictures are selected from each folder and unreasonable pictures are deleted. The labeling rules here are in fact the same as the preliminary screening rules, only applied more strictly: every requirement must be met, pictures of a different person are deleted, and duplicate pictures of the same person are also deleted, so that the final pictures contain no repetition.
Next, labeling work of gradually increasing difficulty is carried out: the data set is classified according to different real scenes, and data subsets are built for each, such as people wearing glasses, people wearing sunglasses, people wearing a hat, people in strong light, people in weak light (at night) and people in rainy weather. The attached requirements are that a hat must not shade the wearer's face; apart from caps that fit tightly to the head, such as cricket caps and velvet caps, other hats are ignored; at least 70% of the face must be visible; and people holding an open umbrella are ignored entirely. The labeled files are stored in a two-layer directory: the first layer is the batch number, and the second layer contains the folders corresponding to the faces (that is, each group of face pictures has its own folder).
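The two-layer storage layout and the per-scene subsets could be organized roughly as in the sketch below. The scenario tag names and the plain-text subset index files are illustrative assumptions; the tags themselves are assumed to come from the manual labeling just described.

```python
import shutil
from pathlib import Path

SCENARIOS = ("glasses", "sunglasses", "hat", "strong_light", "weak_light", "rainy")

def store_labeled_folder(dataset_root: str, batch_no: str, person_id: str,
                         pictures: list[str], tags: set[str]) -> Path:
    """Copy one person's labeled pictures into <batch_no>/<person_id> and index its subsets."""
    # First layer: batch number; second layer: one folder per face/person.
    person_dir = Path(dataset_root) / batch_no / person_id
    person_dir.mkdir(parents=True, exist_ok=True)
    for pic in pictures:
        shutil.copy2(pic, person_dir)
    # Record which scenario subsets this person's folder belongs to.
    for tag in tags & set(SCENARIOS):
        subset_index = Path(dataset_root) / f"subset_{tag}.txt"
        with subset_index.open("a", encoding="utf-8") as fh:
            fh.write(str(person_dir) + "\n")
    return person_dir
```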
3. Data checking: data quality is one of the cores of data labeling, and only high-quality data can train a high-quality algorithm. The standards used in quality inspection are in fact the requirements of the data processing stage; pictures with quality problems are uniformly deleted, and if an entire individual folder does not meet the rules, the folder is deleted, strictly guaranteeing the final quality and preventing the data from harming the algorithm.
The quality-inspected data then undergoes face deduplication, whose aim is to eliminate two situations: the same person being labeled as different people, and similar-looking people being labeled as one person. The data are first processed with the deduplication script to find problematic cases, a human then judges whether a real "collision" exists, and finally the problematic data are deleted.
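One possible shape for such a deduplication script is sketched below: compute one representative feature vector per person folder, flag folder pairs whose cosine similarity exceeds a threshold, and leave the final decision to a human, as the text describes. The crude pixel-based embedding and the 0.6 threshold are stand-in assumptions; a real face-embedding model would be used in practice.

```python
import numpy as np
from PIL import Image

def crude_embedding(image_path: str, size: int = 32) -> np.ndarray:
    """Crude stand-in for a real face embedding: normalized down-scaled grayscale pixels."""
    with Image.open(image_path) as img:
        arr = np.asarray(img.convert("L").resize((size, size)), dtype=np.float32).ravel()
    arr -= arr.mean()
    norm = np.linalg.norm(arr)
    return arr / norm if norm else arr

def flag_possible_duplicates(folders: dict[str, list[str]],
                             threshold: float = 0.6) -> list[tuple[str, str, float]]:
    """Return folder pairs that look like the same person and need manual confirmation."""
    reps = {}
    for person, pics in folders.items():
        vecs = np.stack([crude_embedding(p) for p in pics])
        rep = vecs.mean(axis=0)
        reps[person] = rep / np.linalg.norm(rep)
    flagged, people = [], sorted(reps)
    for i, a in enumerate(people):
        for b in people[i + 1:]:
            sim = float(np.dot(reps[a], reps[b]))   # cosine similarity of unit vectors
            if sim >= threshold:
                flagged.append((a, b, sim))
    return flagged
```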
4. Finished-product classification: as the amount of data grows, the database keeps expanding, and once it is later extended to massive data, updating, querying and managing the data become increasingly complex. Therefore, at the start of database construction, the finished products are classified with future changes in mind. The basic requirements are that the value of any field can be updated conveniently, that the data can be labeled continuously, and that fields can be extended easily, so that data management and querying remain convenient as the number of data types grows.
Similarly, in line with the different cutting approaches shown in fig. 1, finished products produced by manual cutting are also classified separately.
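A small, extensible metadata record per person folder would satisfy the requirements stated above (update any field, keep labeling, add fields later). The sketch below is an assumed design, not part of the patent; the field names, including cut_method for the script/manual distinction, are illustrative.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class FolderRecord:
    batch_no: str
    person_id: str
    cut_method: str                                         # "script" or "manual"
    scenario_tags: list[str] = field(default_factory=list)  # e.g. ["glasses", "rainy"]
    extra: dict[str, str] = field(default_factory=dict)     # room for future fields

def save_record(rec: FolderRecord, path: str) -> None:
    """Persist the record as JSON so individual fields can be updated later."""
    with open(path, "w", encoding="utf-8") as fh:
        json.dump(asdict(rec), fh, ensure_ascii=False, indent=2)
```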
Another aspect of the present invention provides a computer device including a processor for implementing the steps of the data marking method 100 described above when executing a computer program stored in a memory.
Yet another aspect of the embodiments of the present invention provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the data marking method 100 described above.
In summary, a method has been summarized on the basis of long-term experience in data mining and data labeling. On one hand, the process design draws on the process-control concept of quality management, guaranteeing the quality of data production. On the other hand, drawing on the internal rules of machine learning, data subsets for complex scenes are built from the data set, and certain "blind spots" in the data set and in data production are remedied, so that the accuracy and robustness of the face recognition algorithm can be improved and its miss rate reduced. In addition, experience in data management is applied, so that complex, massive data can be handled.
Those skilled in the art will appreciate that the above embodiments are merely exemplary embodiments and that various changes, substitutions, and alterations can be made without departing from the spirit and scope of the invention.

Claims (10)

1. A data marking method, characterized in that the data marking method comprises:
acquiring a video containing multiple people;
performing image-cutting processing on the video containing the multiple people to obtain a preset number of head or face pictures of each person; and
screening, from the preset number of head or face pictures of each person, the head or face pictures that show at least one of a preset wearing object, a preset lighting characteristic and a preset weather characteristic, and labeling one or more of the screened head or face pictures.
2. The data marking method as claimed in claim 1, characterized in that: the preset wearing object comprises glasses, a hat, a scarf or jewelry; the preset lighting characteristic comprises a strong-light characteristic or a weak-light characteristic; and the preset weather characteristic comprises rainy weather, snowy weather, sunny weather, cloudy weather or overcast weather.
3. The data marking method as claimed in claim 1, further comprising at least one of the following steps:
deleting, from the preset number of head or face pictures of each person, any head or face picture in which a preset wearing object occludes 30% or more of the head or face;
deleting any head or face picture that is duplicated among all head or face pictures of the multiple people; and
deleting any head or face picture, among all head or face pictures of the multiple people, that shows an opened umbrella.
4. The data marking method as claimed in claim 1, characterized in that: the step of screening out the head or face pictures that show at least one of a preset wearing object, a preset lighting characteristic and a preset weather characteristic comprises:
screening out, from the preset number of head or face pictures of each person, the head or face pictures that show glasses, sunglasses, a hat, strong light, weak light or rainy weather.
5. The data marking method as claimed in claim 1, further comprising:
saving the preset number of head or face pictures of each person into an individual folder.
6. The data marking method as claimed in claim 5, further comprising:
performing deduplication processing on all head or face pictures of the multiple people across the individual folders, and deleting any individual folder that contains duplicate pictures.
7. The data marking method as claimed in claim 1, characterized in that: the preset number of face pictures of each person comprises at least a front-face picture and side-face pictures, the side-face pictures lying within a range of -30 degrees to +30 degrees relative to the front-face picture.
8. The data marking method as claimed in claim 1, characterized in that: the preset number is greater than or equal to 6 and less than 20; or the preset number is equal to 6.
9. A computer device, characterized in that: the computer device comprises a processor configured to implement the steps of the data marking method as claimed in any one of claims 1 to 8 when executing a computer program stored in a memory.
10. A computer-readable storage medium having a computer program stored thereon, characterized in that: the computer program, when executed by a processor, implements the steps of the data marking method as claimed in any one of claims 1 to 8.
CN201811653221.3A 2018-12-29 2018-12-29 Data marking method, computer device and computer readable storage medium Pending CN111382651A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811653221.3A CN111382651A (en) 2018-12-29 2018-12-29 Data marking method, computer device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811653221.3A CN111382651A (en) 2018-12-29 2018-12-29 Data marking method, computer device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111382651A true CN111382651A (en) 2020-07-07

Family

ID=71216708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811653221.3A Pending CN111382651A (en) 2018-12-29 2018-12-29 Data marking method, computer device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111382651A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103902962A (en) * 2012-12-28 2014-07-02 汉王科技股份有限公司 Shielding or light source self-adaption human face recognition method and device
CN106991438A (en) * 2017-03-20 2017-07-28 新智认知数据服务有限公司 One kind is based on the interactive facial image attribute labeling methods of MFC
CN107016370A (en) * 2017-04-10 2017-08-04 电子科技大学 One kind is based on the enhanced partial occlusion face identification method of data
CN107292252A (en) * 2017-06-09 2017-10-24 南京华捷艾米软件科技有限公司 A kind of personal identification method of autonomous learning
CN107391703A (en) * 2017-07-28 2017-11-24 北京理工大学 The method for building up and system of image library, image library and image classification method
CN107808120A (en) * 2017-09-30 2018-03-16 平安科技(深圳)有限公司 Glasses localization method, device and storage medium
CN107992835A (en) * 2017-12-11 2018-05-04 浙江大学 A kind of glasses image-recognizing method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211110

Address after: 518057 2nd floor, software building, No. 9, Gaoxin Zhongyi Road, Nanshan District, Shenzhen, Guangdong

Applicant after: KUANG-CHI INSTITUTE OF ADVANCED TECHNOLOGY

Address before: 310000 room 1101, building 14, No. 1008, yearning street, Cangqian street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant before: Hangzhou Guangqi Artificial Intelligence Research Institute

RJ01 Rejection of invention patent application after publication

Application publication date: 20200707
