CN110689447A - Real-time detection method for social software user published content based on deep learning - Google Patents

Info

Publication number
CN110689447A
Authority
CN
China
Prior art keywords
information
template
neural network
deep neural
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910817334.0A
Other languages
Chinese (zh)
Inventor
殷磊
胡庆浩
程健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Artificial Intelligence Chip Innovation Institute, Institute of Automation, Chinese Academy of Sciences
Original Assignee
Nanjing Artificial Intelligence Chip Innovation Institute, Institute of Automation, Chinese Academy of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Artificial Intelligence Chip Innovation Institute, Institute of Automation, Chinese Academy of Sciences
Priority to CN201910817334.0A
Publication of CN110689447A
Status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/01 Social networking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/40 Document-oriented image-based pattern recognition
    • G06V 30/41 Analysis of document content
    • G06V 30/413 Classification of content, e.g. text, photographs or tables
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition

Abstract

The invention discloses a method for real-time detection of content published by social software users based on deep learning, and belongs to the technical field of deep learning and image processing. The method comprises the following steps: performing template annotation to generate annotated template information; detecting and classifying text regions of an image to be detected by using a preset deep neural network detection model to generate text region information with categories; and performing template matching according to the annotated template information and the categorized text region information to generate structured information data.

Description

Real-time detection method for social software user published content based on deep learning
Technical Field
The invention belongs to the field of detecting content published on social software, and particularly relates to a deep-learning-based system for real-time detection of content published by social software users.
Background
With the rapid development of the mobile internet, more and more people share their lives through social software, and the number of social software users is growing exponentially. The user base spans all age groups, and teenagers have always been the main force among social software users. During the development of social software, pornographic, violent, gory and other vulgar content (text, images, videos and so on) appears from time to time, which seriously hinders the development of social software, has an extremely bad influence on society, and poses a particularly serious threat to impressionable teenagers. The pressure to strengthen regulatory scrutiny of social software is also increasing. However, as the user base grows, so does the volume of published content, and screening it item by item with human labor is utterly impractical. Social software therefore urgently needs a built-in detection tool that uses AI to evaluate user-submitted content quickly and in real time after the user presses the publish button, allowing the content onto the social platform only after it passes the evaluation.
Traditional text and image detection methods suffer from low accuracy and low speed. With the development of deep learning, more and more practical problems such as text detection and object detection are being addressed with deep learning techniques, whose recognition accuracy is much higher than that of traditional methods while also being much faster. At present, most social software still relies mainly on user reports followed by manual screening, moderator patrols and similar measures to deal with harmful and vulgar content, with automatic algorithmic detection of text and images online serving only as an auxiliary means. This approach has several problems:
(1) Traditional detection methods have low accuracy and are prone to missed detections. They are also too slow, which degrades the user's subjective experience.
(2) Manual detection has been completely overwhelmed by the exponential growth of published content, and the corresponding costs for social software providers keep rising.
(3) Human moderators cannot work continuously around the clock, and detection accuracy declines due to fatigue, long hours of repetitive work and other factors.
To solve these problems, this patent provides a method and system for real-time detection of content published on social software based on deep learning. Before the user presses the publish button, the tool automatically, quickly and in real time detects and evaluates whether the content to be published contains harmful or vulgar material, and publication is granted only if the evaluation passes, realizing a supervision model in which automatic detection is primary and manual review is auxiliary.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a deep-learning-based system for real-time detection of content published by social software users. Text, images, videos and other content are evaluated in real time with a deep-learning-based detection method, and content that does not pass the evaluation is not allowed to be published. This deep-learning-based real-time detection and evaluation technique for text and images uses a trained deep learning model to classify, in real time, the content (text, images, videos) that a social software user intends to publish; if the model judges its input to be harmful or vulgar content, publication is not permitted.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
In a first aspect, a method for real-time detection of content published by social software users based on deep learning is provided. The method includes: performing template annotation to generate annotated template information; detecting and classifying text regions of an image to be detected by using a preset deep neural network detection model to generate text region information with categories; and performing template matching according to the annotated template information and the categorized text region information to generate structured information data.
With reference to the first aspect, in a first possible implementation manner, performing template annotation to generate annotated template information includes: annotating the sizes and relative positions of the anchor and non-anchor text regions of the template, and generating the annotated template information according to the mapping relationship between entities and the anchor and non-anchor text regions.
With reference to the first aspect, in a second possible implementation manner, detecting and classifying text regions of the image to be detected by using the preset deep neural network detection model to generate text region information with categories includes: detecting the text lines of the image to be detected with the preset deep neural network detection model, and obtaining the text region information of the anchor and non-anchor text regions from the output categories.
With reference to the first aspect, in a third possible implementation manner, performing template matching according to the annotated template information and the categorized text region information to generate structured information data includes: taking the annotated template information and the categorized text region information as input, matching and locating the anchors, mapping the categorized text regions to the corresponding entities through the anchors, removing redundant parts including non-text regions and irrelevant text, and generating the structured information.
With reference to the first aspect, the method further includes: performing image preprocessing on the input image to be detected, the image preprocessing including image rectification and/or scaling to a uniform size.
With reference to the first aspect, the method further includes: training to obtain the preset deep neural network detection model.
With reference to the first aspect, training to obtain the preset deep neural network detection model includes: generating samples with a sample generation tool; training with the samples to obtain a preliminary deep neural network detection model; forming a data feedback loop in the detection application to obtain more new samples; and fine-tuning the preliminary deep neural network detection model with the new samples.
In a second aspect, a text detection and analysis device based on a deep neural network is provided, comprising: an annotation module for annotating a template and generating annotated template information; a text region detection module for detecting and classifying text regions of an image to be detected by using a preset deep neural network detection model to generate text region information with categories; and a matching module for performing template matching according to the annotated template information and the categorized text region information to generate structured information data.
With reference to the second aspect, in a first possible implementation manner, the annotation module is configured to: annotate the sizes and relative positions of the anchor and non-anchor text regions of the template, and generate the annotated template information according to the mapping relationship between entities and the anchor and non-anchor text regions.
With reference to the second aspect, in a second possible implementation manner, the text region detection module is configured to: detect the text lines of the image to be detected with the preset deep neural network detection model, and obtain the text region information of the anchor and non-anchor text regions from the output categories.
With reference to the second aspect, in a third possible implementation manner, the matching module is configured to: take the annotated template information and the categorized text region information as input, match and locate the anchors, map the categorized text regions to the corresponding entities through the anchors, remove redundant parts including non-text regions and irrelevant text, and generate the structured information.
With reference to the second aspect and any one of its first to third possible implementation manners, the device further includes an image preprocessing module configured to perform image preprocessing on the input image to be detected, the image preprocessing including image rectification and/or scaling to a uniform size.
With reference to the second aspect and any one of its first to third possible implementation manners, the device further includes a model training module configured to train and obtain the preset deep neural network detection model.
With reference to the second aspect, the model training module is configured to: generate samples with a sample generation tool; train with the samples to obtain a preliminary deep neural network detection model; form a data feedback loop in the detection application to obtain more new samples; and fine-tune the preliminary deep neural network detection model with the new samples.
The invention has the following advantages: 1. A powerful, state-of-the-art deep learning detection framework is used, making the detection results more reliable. 2. The gain in reliability does not increase the user's waiting time or affect the user experience. 3. Compared with purely manual supervision, working efficiency is greatly improved, and the uncertainties introduced by relying on human labor are alleviated to a certain extent. 4. The publication of harmful or vulgar information on social software is greatly reduced, which is of great significance for purifying the online environment and atmosphere and for creating a healthy growth environment for teenagers.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a schematic diagram of template matching in a preferred embodiment.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
The deep-learning-based method for real-time detection of content published on social software provided by the embodiments of the invention detects and classifies text regions in images with a deep neural network detection model, and then performs template matching by combining the annotated template information with the categorized text region information obtained from detection and classification to generate structured information data. It enables fast and accurate detection and analysis of the various fields in a bill image, provides real-time, accurate, universal, robust and scalable detection and analysis of document images, and can be widely applied to the detection, analysis and recognition of all kinds of images containing text.
As shown in Fig. 1, the method specifically includes the following steps:
Step 1: perform template annotation to generate annotated template information.
Specifically, the sizes and relative positions of the anchor and non-anchor text regions of the template are annotated, and the mapping relationship between entities and the anchor and non-anchor text regions is annotated to generate the annotated template information. This process labels the position and type of every field to be recognized, including whether it is an anchor and whether the text line is a date, Chinese, English, and so on. The generated annotated template information is used for subsequent template matching.
It should be noted that the templates to be annotated are not limited to one or two; multiple templates may be preset according to actual needs.
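As an illustration only, the annotated template information described above could be serialized as a small structure like the following; the field names (is_anchor, box, entity) and the pixel coordinates are assumptions made for this sketch, not a format defined by the patent.

```python
import json

# Hypothetical annotated template: one anchor region plus one non-anchor region,
# each with its size/position and the entity it maps to.
template = {
    "template_id": "chat_screenshot_v1",           # assumed identifier
    "regions": [
        {   # anchor region: a stable field used to locate the whole layout
            "name": "title_bar",
            "is_anchor": True,
            "box": [40, 20, 680, 80],               # [x1, y1, x2, y2] in template pixels
            "category": "chinese_text",
        },
        {   # non-anchor region, positioned relative to the anchor
            "name": "message_body",
            "is_anchor": False,
            "box": [40, 120, 680, 900],
            "category": "chinese_text",
            "entity": "user_message",               # mapping from region to entity
        },
    ],
}

# Persist the annotation so the matching step can load it later.
with open("template.json", "w", encoding="utf-8") as f:
    json.dump(template, f, ensure_ascii=False, indent=2)
```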
Step 2: detect and classify text regions of the image to be detected using the preset deep neural network detection model to generate text region information with categories.
Specifically, the preset deep neural network detection model detects the text lines of the image to be detected, and the text region information of the anchor and non-anchor text regions is obtained from the output categories. The detection model detects the text lines and all anchors are obtained from the output categories; in this process the anchors with the highest confidence are selected, and the relative positions of the layout are determined through the anchors. The preset deep neural network detection model may adopt a Faster R-CNN model, a Mask R-CNN model, or any other feasible deep neural network model in the prior art; the embodiments of the present invention place no particular limitation on this.
Using a deep neural network object detection framework, text regions of different scales (character sizes) can be located accurately and their textual content preliminarily classified. Determining the anchors is equivalent to determining the relative position of each field in the whole layout, and combining the previous detection results with the relative positions of the layout then accurately locates the position and content information of the required fields. Stable anchors therefore enable accurate matching and improve the accuracy of the subsequent matching steps.
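The description names Faster R-CNN and Mask R-CNN as possible detection models. The sketch below shows how such a detector might be set up with torchvision and queried for categorized text regions; the three-class scheme (background, anchor text, non-anchor text), the score threshold and the input size are assumptions, and the replaced head would still need to be trained as described later.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background + anchor text region + non-anchor text region (assumed)

# Start from a COCO-pretrained Faster R-CNN and swap in a head for our classes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
model.eval()

def detect_text_regions(image_tensor, score_thresh=0.5):
    """Return boxes, labels and scores for text regions in a CHW float image."""
    with torch.no_grad():
        out = model([image_tensor])[0]
    keep = out["scores"] > score_thresh
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]

# Example call on a random image-sized tensor.
boxes, labels, scores = detect_text_regions(torch.rand(3, 800, 600))
```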
Step 3: perform template matching according to the annotated template information and the categorized text region information to generate structured information data.
Specifically, the annotated template information and the categorized text region information are taken as input, the anchors are matched and located, the categorized text regions are mapped to the corresponding entities through the anchors, redundant parts including non-text regions and irrelevant text are removed, and the structured information is generated. Based on the detected relative positions of the text lines and the anchors, the type of each line can be determined and the result structured after recognition. The structured information can then be output to the corresponding application. The recognition step involved here may use an existing recognition model such as Tesseract or CRNN; the embodiments of the present invention are not limited in this respect.
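For the recognition step, the description mentions Tesseract and CRNN; a minimal sketch using pytesseract (a Tesseract wrapper) as a stand-in recognizer is shown below. The box format and the language setting are illustrative assumptions, and the corresponding Tesseract language data must be installed.

```python
from PIL import Image
import pytesseract

def recognize_region(image_path, box):
    """Crop a detected text region and run OCR on it."""
    x1, y1, x2, y2 = box
    crop = Image.open(image_path).crop((x1, y1, x2, y2))
    # Simplified Chinese plus English, matching the field types mentioned above.
    return pytesseract.image_to_string(crop, lang="chi_sim+eng").strip()
```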
FIG. 2 is a schematic diagram of the template matching process in a preferred embodiment. As shown in FIG. 2, step 3 may further include:
301: matching the template by combining the annotated template information and the categorized text region information;
302: mapping the text regions by combining the annotated template information, the categorized text region information and the successfully matched anchor;
303: deduplicating the mapping results and removing redundant, repeated text regions;
304: structuring the complete detection result according to the template information to complete the template matching.
Based on the relative positions of the detected text regions, the annotation template is matched and non-text regions and irrelevant text are excluded, which captures the key information better than generic recognition methods.
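A rough sketch of the matching flow in steps 301 to 304 might look as follows; the data layout for detections and template regions, and the IoU threshold, are assumptions made for illustration rather than values prescribed by the patent.

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_template(template, detections, iou_thresh=0.3):
    """Map detections ({'box', 'label', 'score'}) to template entities via the anchor."""
    anchor_region = next(r for r in template["regions"] if r["is_anchor"])
    anchors = [d for d in detections if d["label"] == "anchor"]
    if not anchors:
        return {}
    anchor = max(anchors, key=lambda d: d["score"])        # 301: locate the best anchor
    dx = anchor["box"][0] - anchor_region["box"][0]        # layout offset relative
    dy = anchor["box"][1] - anchor_region["box"][1]        # to the template

    structured = {}
    for region in template["regions"]:
        if region["is_anchor"]:
            continue
        expected = [region["box"][0] + dx, region["box"][1] + dy,
                    region["box"][2] + dx, region["box"][3] + dy]
        # 302/303: keep only the best-overlapping detection for this entity,
        # discarding non-text regions, irrelevant text and duplicates.
        best = max(detections, key=lambda d: iou(d["box"], expected), default=None)
        if best is not None and iou(best["box"], expected) > iou_thresh:
            structured[region["entity"]] = best["box"]     # 304: structured output
    return structured
```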
Further, in addition to steps 1 to 3 above, the deep-learning-based method for real-time detection of content published on social software provided by the embodiments of the invention includes the following step:
Image preprocessing is performed on the input image to be detected. The preprocessing includes, but is not limited to, image rectification and/or scaling to a uniform size, and the preprocessing operations can be configured according to the actual situation.
In addition, the deep-neural-network-based text detection and analysis method provided by the embodiments of the present invention further includes the following step:
training to obtain the preset deep neural network detection model, which specifically includes:
generating samples with a sample generation tool;
training with the samples;
obtaining a preliminary deep neural network detection model;
forming a data feedback loop in the detection application to obtain more new samples; and
fine-tuning the preliminary deep neural network detection model with the new samples.
In the above process, the text lines in the samples are classified (including, but not limited to, into anchor and non-anchor classes) before the detection model is trained.
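A compressed sketch of this two-stage procedure (initial training on generated samples, then fine-tuning on samples fed back from the deployed detector) might look as follows; the optimizer settings and epoch counts are assumptions, and the data loaders are assumed to follow the torchvision detection convention of yielding lists of images and target dictionaries.

```python
import torch

def train(model, loader, epochs, lr):
    """One training stage for a torchvision-style detection model."""
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9, weight_decay=5e-4)
    model.train()
    for _ in range(epochs):
        for images, targets in loader:        # lists of image tensors and target dicts
            loss_dict = model(images, targets)  # detection models return a loss dict in train mode
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model

# Stage 1: preliminary model trained on synthetically generated samples.
# model = train(model, synthetic_loader, epochs=20, lr=1e-3)
# Stage 2: fine-tune on new samples fed back from the deployed detector.
# model = train(model, feedback_loader, epochs=5, lr=1e-4)
```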
For images, the image is input directly into the trained model; for videos, frames are dynamically captured online and then input into the trained model for detection. If the detection result carries a harmful or vulgar label, publication is not allowed.
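A sketch of this image/video dispatch is given below; the frame sampling stride, the handled file extensions and the is_objectionable helper (which would wrap the trained model) are assumptions for illustration.

```python
import cv2

FRAME_STRIDE = 30  # roughly one checked frame per second of 30 fps video (assumed)

def content_allowed(path, is_objectionable):
    """Return False as soon as the image, or any sampled video frame, is flagged."""
    if path.lower().endswith((".jpg", ".jpeg", ".png")):
        return not is_objectionable(cv2.imread(path))

    cap = cv2.VideoCapture(path)
    idx = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % FRAME_STRIDE == 0 and is_objectionable(frame):
                return False
            idx += 1
    finally:
        cap.release()
    return True
```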
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all changes and modifications that fall within the true scope of the embodiments of the present application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A deep-learning-based method for real-time detection of content published by social software users, characterized in that the method comprises the following steps: performing template annotation to generate annotated template information; detecting and classifying text regions of an image to be detected by using a preset deep neural network detection model to generate text region information with categories; and performing template matching according to the annotated template information and the categorized text region information to generate structured information data.
2. The method of claim 1, wherein performing template annotation to generate annotated template information comprises: annotating the sizes and relative positions of the anchor and non-anchor text regions of the template, and generating the annotated template information according to the mapping relationship between entities and the anchor and non-anchor text regions.
3. The method of claim 1, wherein detecting and classifying text regions of the image to be detected by using a preset deep neural network detection model to generate text region information with categories comprises: detecting the text lines of the image to be detected with the preset deep neural network detection model, and obtaining the text region information of the anchor and non-anchor text regions from the output categories.
4. The method of claim 1, wherein performing template matching according to the annotated template information and the categorized text region information to generate structured information data comprises: taking the annotated template information and the categorized text region information as input, matching and locating the anchors, mapping the categorized text regions to the corresponding entities through the anchors, removing redundant parts including non-text regions and irrelevant text, and generating the structured information.
5. The method according to any one of claims 1 to 4, further comprising: performing image preprocessing on the input image to be detected, wherein the image preprocessing comprises image rectification and/or scaling to a uniform size.
6. The method according to any one of claims 1 to 4, further comprising: training to obtain the preset deep neural network detection model.
7. The method of claim 6, wherein training to obtain the preset deep neural network detection model comprises: generating samples with a sample generation tool; training with the samples to obtain a preliminary deep neural network detection model; forming a data feedback loop in the detection application to obtain more new samples; and fine-tuning the preliminary deep neural network detection model with the new samples.
CN201910817334.0A 2019-08-30 2019-08-30 Real-time detection method for social software user published content based on deep learning Pending CN110689447A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910817334.0A CN110689447A (en) 2019-08-30 2019-08-30 Real-time detection method for social software user published content based on deep learning

Publications (1)

Publication Number Publication Date
CN110689447A true CN110689447A (en) 2020-01-14

Family

ID=69107647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910817334.0A Pending CN110689447A (en) 2019-08-30 2019-08-30 Real-time detection method for social software user published content based on deep learning

Country Status (1)

Country Link
CN (1) CN110689447A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469047A (en) * 2015-11-23 2016-04-06 上海交通大学 Chinese detection method based on unsupervised learning and deep learning network and system thereof
US20180246873A1 (en) * 2017-02-28 2018-08-30 Cisco Technology, Inc. Deep Learning Bias Detection in Text
KR101769918B1 (en) * 2017-05-17 2017-08-21 주식회사 마인드그룹 Recognition device based deep learning for extracting text from images
KR20190093752A (en) * 2018-01-10 2019-08-12 네이버 주식회사 Method and system for scene text detection using deep learning
CN108664996A (en) * 2018-04-19 2018-10-16 厦门大学 A kind of ancient writing recognition methods and system based on deep learning
CN109086756A (en) * 2018-06-15 2018-12-25 众安信息技术服务有限公司 A kind of text detection analysis method, device and equipment based on deep neural network

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111695518A (en) * 2020-06-12 2020-09-22 北京百度网讯科技有限公司 Method and device for labeling structured document information and electronic equipment
US11687704B2 (en) 2020-06-12 2023-06-27 Beijing Baidu Netcom Science Technology Co., Ltd. Method, apparatus and electronic device for annotating information of structured document
CN111695518B (en) * 2020-06-12 2023-09-29 北京百度网讯科技有限公司 Method and device for labeling structured document information and electronic equipment
CN112381086A (en) * 2020-11-06 2021-02-19 厦门市美亚柏科信息股份有限公司 Method and device for outputting image character recognition result in structured mode
CN114611497A (en) * 2022-05-10 2022-06-10 北京世纪好未来教育科技有限公司 Training method of language diagnosis model, language diagnosis method, device and equipment

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 211000 floor 3, building 3, Qilin artificial intelligence Industrial Park, 266 Chuangyan Road, Nanjing, Jiangsu
Applicant after: Zhongke Nanjing artificial intelligence Innovation Research Institute
Address before: 211000 3rd floor, building 3, 266 Chuangyan Road, Jiangning District, Nanjing City, Jiangsu Province
Applicant before: NANJING ARTIFICIAL INTELLIGENCE CHIP INNOVATION INSTITUTE, INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
RJ01 Rejection of invention patent application after publication
Application publication date: 20200114