CN111797832A - Automatic generation method and system for an image region of interest, and image processing method - Google Patents

Automatic generation method and system for an image region of interest, and image processing method Download PDF

Info

Publication number
CN111797832A
CN111797832A (application CN202010673144.9A); granted as CN111797832B
Authority
CN
China
Prior art keywords
preset
region
image
target
interest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010673144.9A
Other languages
Chinese (zh)
Other versions
CN111797832B (en)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Shuzhilian Technology Co Ltd
Original Assignee
Chengdu Shuzhilian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Shuzhilian Technology Co Ltd
Priority to CN202010673144.9A
Publication of CN111797832A
Application granted
Publication of CN111797832B
Current legal status: Active
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and system for automatically generating an image region of interest, and an image processing method, relating to the field of image processing. The method comprises the following steps: acquiring first images shot in a preset scene and labeling the preset targets in the first images to obtain training data; collecting the position information of the preset targets in the training data; obtaining a first position map of the preset targets in the preset scene based on that position information; training a target recognition network with the training data to obtain a trained target recognition network; acquiring second images shot in the preset scene and recognizing the preset targets in the second images with the trained network to obtain a plurality of second position maps; and fusing the first position map with the second position maps to generate the region of interest. The method and system automatically calibrate the region of interest in an image, improve the working efficiency of computer vision software development, and save labor and economic cost.

Description

Automatic generation method and system for an image region of interest, and image processing method
Technical Field
The invention relates to the field of video image processing, in particular to a method and system for automatically generating an image region of interest, and to an image processing method.
Background
In the field of image processing, a region of interest (ROI) is a region selected from an image: the region of major concern for image analysis, delineated for further processing. Using an ROI to delineate the target the user wants to examine reduces processing time and increases precision. In traditional region-of-interest division, the range must be marked manually on the image in advance, and every different scene must be marked again.
In big-data applications, developers can only use data after it has been labeled. For extremely large-scale data, labeling is usually done manually, which consumes a great deal of time and money.
Directly reducing the amount of labeled data lowers the accuracy of the region of interest and causes regions to be missed; conversely, using a large amount of labeled data to improve the accuracy of the delineated region of interest incurs a huge cost.
Disclosure of Invention
The invention provides a method and system for automatically generating an image region of interest, and an image processing method, which automatically calibrate the region of interest in an image. The method and system avoid the trouble of manually re-delineating the region of interest for each different scene, improving the working efficiency of computer vision software development. They also reduce the amount of labeled data required, saving labor and economic cost.
In order to achieve the above object, the present invention provides an automatic generation method of an image region of interest, including:
collecting a plurality of first images shot in a preset scene, and marking preset targets in the first images to obtain training data;
collecting position information of a preset target in training data;
obtaining a first position map of the preset targets in the preset scene based on the collected position information of the preset targets;
training a target recognition network by using the training data to obtain a trained target recognition network;
acquiring a plurality of second images shot in a preset scene, and recognizing preset targets in the second images by using a trained target recognition network to obtain second position maps of a plurality of preset targets;
and fusing the first position map and the plurality of second position maps to generate the region of interest.
The method is based on semi-supervised learning and big-data theory: targets in the scene are identified by a pre-trained network, the positions of the identified targets are recorded, and the region of interest of the scene is generated automatically using Gaussian-kernel convolution and related image processing techniques. This avoids manually re-calibrating the region of interest for each new scene, improves the working efficiency of computer vision software development, reduces the amount of labeled data required, and saves labor and economic cost. At the same time, the amount of usable data increases markedly, which improves the precision with which the region of interest is defined. The method and system lower the engineering difficulty of using regions of interest, allowing region-of-interest processing to be applied more widely in computer vision engineering practice, thereby improving the accuracy and efficiency of computer image processing.
Preferably, the number of second images is greater than the number of first images; that is, only a small amount of labeled data is needed to drive the processing of a large amount of unlabeled data.
Preferably, the method further comprises:
performing clutter filtering on the generated region of interest;
and binarizing the clutter-filtered region of interest.
The clutter filtering effectively eliminates spots in the image, i.e. removes peripheral noise points, improving the precision with which the region of interest is defined; the scene is then segmented into a target region where targets may appear and a background region where they are unlikely to appear.
Preferably, the clutter filtering of the generated region of interest is specifically: performing edge detection on the image and eliminating detected spots whose side length is less than 5 pixels.
Preferably, binarizing the clutter-filtered region of interest is specifically: setting pixels in the image whose value is at least 1 to 1, and the remaining pixels to 0.
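As a sketch of the two post-processing steps above, the following Python/NumPy code binarizes the region-of-interest map and then drops small spots. The patent describes edge detection followed by spot removal; here a connected-component flood fill is used as a common stand-in, the steps are ordered binarize-then-filter because component analysis operates on a binary map, and the function names, the 4-connectivity, and the reading of "side length" as the bounding-box side are all assumptions, not taken from the patent:

```python
import numpy as np
from collections import deque

def clean_and_binarize(roi, min_side=5, threshold=1.0):
    """Binarize the ROI map (pixels >= threshold -> 1, else 0), then
    remove connected spots whose bounding box is smaller than
    min_side x min_side pixels (assumed interpretation of the patent's
    'side length less than 5 pixels')."""
    binary = (np.asarray(roi, dtype=float) >= threshold).astype(np.uint8)
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    for r in range(h):
        for c in range(w):
            if binary[r, c] and not seen[r, c]:
                # flood-fill one spot (4-connectivity), tracking its bounding box
                q = deque([(r, c)])
                seen[r, c] = True
                comp = []
                rmin = rmax = r
                cmin = cmax = c
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    rmin, rmax = min(rmin, y), max(rmax, y)
                    cmin, cmax = min(cmin, x), max(cmax, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if (rmax - rmin + 1) < min_side and (cmax - cmin + 1) < min_side:
                    for y, x in comp:  # erase this small spot
                        binary[y, x] = 0
    return binary
```

In practice an optimized routine such as OpenCV's connected-components analysis would replace the explicit flood fill; the loop above only illustrates the idea.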
Preferably, obtaining the first position map of the preset targets in the preset scene based on the position information of the preset targets specifically comprises: based on the position information, converting each preset target into a circular spot covering it by Gaussian-kernel convolution, thereby obtaining the first position map of the preset targets in the preset scene.
Preferably, the formula for converting the preset target into a circular spot covering the preset target by using gaussian kernel convolution is as follows:
G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))
wherein G(x, y) is the appearance-position map of a single target, x is the abscissa, y is the ordinate, π is the circular constant, σ is the Gaussian kernel size, and e is the natural constant.
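A minimal Python/NumPy sketch of this step might stamp the Gaussian G(x, y) above at each collected target position to build the first position map. The function names, the default σ, and the truncation radius (3σ) are illustrative assumptions, not part of the patent:

```python
import numpy as np

def gaussian_patch(radius, sigma):
    """Sample G(x, y) = 1/(2*pi*sigma^2) * exp(-(x^2+y^2)/(2*sigma^2))
    on a (2*radius+1) x (2*radius+1) grid centred at the origin."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

def position_map(shape, centers, sigma=5, radius=15):
    """Stamp a circular Gaussian spot at each (row, col) target centre,
    clipping patches at the image border."""
    h, w = shape
    out = np.zeros((h, w))
    patch = gaussian_patch(radius, sigma)
    for r, c in centers:
        r0, r1 = max(r - radius, 0), min(r + radius + 1, h)
        c0, c1 = max(c - radius, 0), min(c + radius + 1, w)
        out[r0:r1, c0:c1] += patch[(r0 - r + radius):(r1 - r + radius),
                                   (c0 - c + radius):(c1 - c + radius)]
    return out
```

Each stamped spot integrates to approximately 1, so the fused map later behaves like a count of target appearances per location.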
Preferably, the formula for generating the region of interest by fusing the first location map and the plurality of second location maps is as follows:
ROI = Σₘ G(x, y)
wherein ROI is the region of interest, m is the number of pictures, and G(x, y) is the appearance-position map of a single target. Fusing the first position map with the plurality of second position maps to generate the region of interest makes full use of the advantages of big data, so that the statistical probability of target positions approaches the real situation as closely as possible, and the coverage of the resulting region of interest is more comprehensive and accurate.
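The fusion formula above is a plain element-wise sum over the m position maps. A minimal sketch, assuming the maps are NumPy arrays of equal shape (the function name is hypothetical):

```python
import numpy as np

def fuse_position_maps(first_map, second_maps):
    """ROI = sum over all m position maps: the first map built from the
    labeled training data plus the second maps produced by running the
    trained recognition network on unlabeled images."""
    roi = np.array(first_map, dtype=float)
    for g in second_maps:
        roi = roi + np.asarray(g, dtype=float)
    return roi
```

The result is a heat map of where targets have appeared across all images; the clutter-filtering and binarization steps then turn it into a binary region-of-interest mask.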
Meanwhile, the invention also provides an automatic generation system for an image region of interest, the system comprising:
the training data acquisition unit is used for acquiring a plurality of first images shot in a preset scene, and marking preset targets in the first images to obtain training data;
the preset target position information acquisition unit is used for acquiring position information of a preset target in the training data;
the first position map obtaining unit is used for obtaining a first position map of the preset target in a preset scene based on the collected position information of the preset target;
the training unit is used for training the target recognition network by using the training data to obtain the trained target recognition network;
the second position map obtaining unit is used for acquiring a plurality of second images shot in a preset scene, and recognizing preset targets in the second images by using the trained target recognition network to obtain second position maps of a plurality of preset targets;
and the interested region generating unit is used for fusing the first position map and the plurality of second position maps to generate the interested region.
The system further comprises:
a clutter filtering unit for performing clutter filtering on the generated region of interest;
and a binarization unit for binarizing the clutter-filtered region of interest.
The invention also provides an image processing method, which comprises the following steps:
collecting an original image;
processing the original image with the above automatic generation method for an image region of interest, to obtain the region of interest in the original image;
multiplying the region-of-interest image with the original image, thereby removing the fixed background of the original image and retaining the regions where targets may appear, to obtain a new image after region-of-interest processing;
and inputting the new image after region-of-interest processing into a neural network for processing, including but not limited to: target recognition, detection and counting.
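The multiplication step above can be sketched in a few lines of Python/NumPy: a binary region-of-interest mask zeroes out background pixels while leaving the candidate target regions unchanged. The function name is an illustrative assumption:

```python
import numpy as np

def apply_roi(image, roi_mask):
    """Multiply the binary region-of-interest mask with the original
    image: background pixels become 0, pixels inside the region of
    interest keep their original values."""
    image = np.asarray(image)
    mask = np.asarray(roi_mask)
    if image.ndim == 3:
        # broadcast a 2-D mask over the colour channels of an H x W x C image
        mask = mask[..., None]
    return image * mask
```

The masked image can then be fed to any downstream network; since the fixed background is zeroed, the network only sees the regions where targets may appear.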
One or more technical schemes provided by the invention at least have the following technical effects or advantages:
the method and the system for automatically demarcating the region of interest based on semi-supervised learning and big data fusion are provided for solving the problems that the conventional region of interest needs to be manually demarcated, the region of interest needs to be demarcated again for different scenes, and time and labor are consumed. The method and the system can automatically define the region outside the background from the scene, solve the problem that software developers need to manually define the region of interest again when facing a new scene, remove the trouble of background interference, and effectively improve the working efficiency and the development progress of the software developers. Meanwhile, the method and the system use a semi-supervised learning mode, so that the demand of marking data can be reduced, the utilization rate of the data is improved, and the cost is saved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention;
FIG. 1 is a flow chart of a method for automatically generating a region of interest in an image according to the present invention;
fig. 2 is a schematic diagram of the automatic generation system of the region of interest in the image according to the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflicting with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described and thus the scope of the present invention is not limited by the specific embodiments disclosed below.
Example One
The embodiment of the invention provides a method for automatically generating the region of interest in an image. Based on semi-supervised learning and big-data theory, the method identifies targets in the scene with a pre-trained network, records the positions of the identified targets, and automatically generates the region of interest of the scene using Gaussian-kernel convolution and related image processing techniques.
Referring to fig. 1, the automatic generation process of the region of interest is as follows:
collecting a plurality of images shot in a preset scene, and labeling preset targets in the images to obtain training data;
training a target recognition network with the training data to obtain a trained network that can recognize the preset targets in an image; the network may be a conventional VGG, ResNet, SSD or YOLO network, or a custom network;
collecting the positions of the targets in the existing training data;
counting the target positions in the existing training data, and converting each identified target into a circular patch covering it by Gaussian-kernel convolution; this yields a first position map of where targets may appear in the preset scene. The mathematical principle is as follows:
G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))
wherein x is the abscissa, y is the ordinate, π is the circular constant, σ is the Gaussian kernel size, and e is the natural constant.
Expanding the target position data by the semi-supervised learning principle: the pre-trained network performs target recognition on pictures of the target scene that have not undergone region-of-interest processing. After targets are recognized in a large number of collected pictures, a large number of second position maps containing the targets are obtained.
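This data-expansion loop might be sketched as follows, assuming a hypothetical `detector` callable standing in for the pre-trained recognition network and a `make_position_map` helper such as the Gaussian stamping described earlier; both names are illustrative, not APIs from the patent:

```python
import numpy as np

def accumulate_second_maps(images, detector, make_position_map):
    """Run the pre-trained recognition network over many unlabeled
    scene images; each image yields one second position map.

    detector(image) -> list of (row, col) target centres;
    make_position_map(shape, centers) -> 2-D position map."""
    maps = []
    for img in images:
        centers = detector(img)
        maps.append(make_position_map(img.shape[:2], centers))
    return maps
```

Because the detector runs on unlabeled images, the number of second maps can grow far beyond the labeled set, which is what lets the fused region of interest approach the true statistics of target positions.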
Generating the region of interest: the first position map and the generated second position maps of the targets are fused into a single map:
ROI = Σₘ G(x, y)
wherein ROI is the region of interest, m is the number of pictures, and G(x, y) is the appearance-position map of a single target.
Performing clutter filtering on the region of interest: edge detection is performed on the image, and detected spots with side length smaller than 5 pixels are eliminated.
Performing binarization on the region of interest: pixels whose value is at least 1 are set to 1, and the remaining pixels are set to 0. This yields a region-of-interest map that can be used directly to cut the original image:
f(a) = { 1, a ≥ 1; 0, a < 1 }
Where a is the pixel value.
Example Two
Referring to fig. 2, the second embodiment of the present invention provides an automatic generation system for an image region of interest, comprising:
the training data acquisition unit is used for acquiring a plurality of first images shot in a preset scene, and marking preset targets in the first images to obtain training data;
the preset target position information acquisition unit is used for acquiring position information of a preset target in the training data;
the first position map obtaining unit is used for obtaining a first position map of a preset target in a preset scene based on the position information of the preset target;
the training unit is used for training the target recognition network by using the training data to obtain the trained target recognition network;
the second position map obtaining unit is used for acquiring a plurality of second images shot in a preset scene, and recognizing preset targets in the second images by using the trained target recognition network to obtain second position maps of a plurality of preset targets;
and the interested region generating unit is used for fusing the first position map and the plurality of second position maps to generate the interested region.
The system further comprises:
a clutter filtering unit for performing clutter filtering on the generated region of interest;
and a binarization unit for binarizing the clutter-filtered region of interest.
Example Three
The third embodiment of the invention provides an image processing method, which comprises the following steps:
collecting an original image;
processing the original image with the above automatic generation method for an image region of interest, to obtain the region of interest in the original image;
multiplying the region-of-interest image with the original image, thereby removing the fixed background of the original image and retaining the regions where targets may appear, to obtain a new image after region-of-interest processing;
and inputting the new image after region-of-interest processing into a neural network for processing, including but not limited to: target recognition, detection and counting.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. An automatic generation method of an image region of interest, characterized in that the method comprises:
collecting a plurality of first images shot in a preset scene, and marking preset targets in the first images to obtain training data;
collecting position information of a preset target in training data;
obtaining a first position map of the preset targets in the preset scene based on the collected position information of the preset targets;
training a target recognition network by using the training data to obtain a trained target recognition network;
acquiring a plurality of second images shot in a preset scene, and recognizing preset targets in the second images by using a trained target recognition network to obtain second position maps of a plurality of preset targets;
and fusing the first position map and the plurality of second position maps to generate the region of interest.
2. The method of claim 1, wherein the number of the second images is greater than the number of the first images.
3. The method for automatically generating an image region of interest according to claim 1, further comprising:
clutter filtering to generate an interested area;
and binarizing the region of interest after clutter filtering.
4. The method for automatically generating an image region of interest according to claim 3, wherein the clutter filtering of the generated region of interest is specifically: performing edge detection on the image and eliminating detected spots whose side length is less than 5 pixels.
5. The method for automatically generating an image region of interest according to claim 3, wherein binarizing the clutter-filtered region of interest is specifically: setting pixels in the image whose value is at least 1 to 1, and the remaining pixels to 0.
6. The method for automatically generating an image region of interest according to claim 1, wherein obtaining the first position map of the preset targets in the preset scene based on the position information of the preset targets specifically comprises: based on the position information, converting each preset target into a circular spot covering it by Gaussian-kernel convolution, thereby obtaining the first position map of the preset targets in the preset scene.
7. The method for automatically generating an image region of interest according to claim 6, wherein the formula for transforming the preset target into a circular spot covering the preset target by using the Gaussian kernel convolution is as follows:
G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))
wherein G(x, y) is the appearance-position map of a single target, x is the abscissa, y is the ordinate, π is the circular constant, σ is the Gaussian kernel size, and e is the natural constant.
8. The method for automatically generating an image region of interest according to claim 1, wherein the formula for generating the region of interest by fusing the first location map and the plurality of second location maps is as follows:
ROI = Σₘ G(x, y)
wherein ROI is the region of interest, m is the number of pictures, and G(x, y) is the appearance-position map of a single target.
9. An automatic image region-of-interest generation system, the system comprising:
the training data acquisition unit is used for acquiring a plurality of first images shot in a preset scene, and marking preset targets in the first images to obtain training data;
the preset target position information acquisition unit is used for acquiring position information of a preset target in the training data;
the first position map obtaining unit is used for obtaining a first position map of the preset target in a preset scene based on the collected position information of the preset target;
the training unit is used for training the target recognition network by using the training data to obtain the trained target recognition network;
the second position map obtaining unit is used for acquiring a plurality of second images shot in a preset scene, and recognizing preset targets in the second images by using the trained target recognition network to obtain second position maps of a plurality of preset targets;
and the interested region generating unit is used for fusing the first position map and the plurality of second position maps to generate the interested region.
10. An image processing method, characterized in that the method comprises:
collecting an original image;
processing the original image with the automatic generation method for an image region of interest according to any one of claims 1 to 8, to obtain the region of interest in the original image;
multiplying the region-of-interest image with the original image, thereby removing the fixed background of the original image and retaining the regions where targets may appear, to obtain a new image after region-of-interest processing;
and inputting the new image after region-of-interest processing into a neural network for processing, including but not limited to: target recognition, detection and counting.
CN202010673144.9A 2020-07-14 2020-07-14 Automatic generation method and system for image region of interest and image processing method Active CN111797832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010673144.9A CN111797832B (en) 2020-07-14 2020-07-14 Automatic generation method and system for image region of interest and image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010673144.9A CN111797832B (en) 2020-07-14 2020-07-14 Automatic generation method and system for image region of interest and image processing method

Publications (2)

Publication Number Publication Date
CN111797832A (en) 2020-10-20
CN111797832B CN111797832B (en) 2024-02-02

Family

ID=72806812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010673144.9A Active CN111797832B (en) 2020-07-14 2020-07-14 Automatic generation method and system for image region of interest and image processing method

Country Status (1)

Country Link
CN (1) CN111797832B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435447A (en) * 2021-07-26 2021-09-24 杭州海康威视数字技术股份有限公司 Image annotation method, device and system
CN113744126A (en) * 2021-08-06 2021-12-03 Oppo广东移动通信有限公司 Image processing method and device, computer readable medium and electronic device

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103177458A (en) * 2013-04-17 2013-06-26 北京师范大学 Frequency-domain-analysis-based method for detecting region-of-interest of visible light remote sensing image
CN104573669A (en) * 2015-01-27 2015-04-29 中国科学院自动化研究所 Image object detection method
US20150206004A1 (en) * 2014-01-20 2015-07-23 Ricoh Company, Ltd. Object tracking method and device
CN104835175A (en) * 2015-05-26 2015-08-12 西南科技大学 Visual attention mechanism-based method for detecting target in nuclear environment
CN107590262A (en) * 2017-09-21 2018-01-16 黄国华 The semi-supervised learning method of big data analysis
CN108629378A (en) * 2018-05-10 2018-10-09 上海鹰瞳医疗科技有限公司 Image-recognizing method and equipment
CN109961834A (en) * 2019-03-22 2019-07-02 上海联影医疗科技有限公司 The generation method and equipment of diagnostic imaging report
CN110503047A (en) * 2019-08-26 2019-11-26 西南交通大学 A kind of rds data processing method and processing device based on machine learning
CN111091127A (en) * 2019-12-16 2020-05-01 腾讯科技(深圳)有限公司 Image detection method, network model training method and related device
CN111161227A (en) * 2019-12-20 2020-05-15 成都数之联科技有限公司 Target positioning method and system based on deep neural network
CN111368658A (en) * 2020-02-24 2020-07-03 交通运输部水运科学研究所 Automatic detection method and system for external target of intelligent ship in autonomous navigation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Dong Zhipeng et al., "Convolutional neural network recognition method based on size features of remote sensing image targets", 《测绘学报》 (Acta Geodaetica et Cartographica Sinica), vol. 48, no. 10, pages 1285 - 1295 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435447A (en) * 2021-07-26 2021-09-24 杭州海康威视数字技术股份有限公司 Image annotation method, device and system
CN113435447B (en) * 2021-07-26 2023-08-04 杭州海康威视数字技术股份有限公司 Image labeling method, device and image labeling system
CN113744126A (en) * 2021-08-06 2021-12-03 Oppo广东移动通信有限公司 Image processing method and device, computer readable medium and electronic device

Also Published As

Publication number Publication date
CN111797832B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN113160192B (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
CN111695486B (en) High-precision direction signboard target extraction method based on point cloud
CN109413411B (en) Black screen identification method and device of monitoring line and server
CN114677554A (en) Statistical filtering infrared small target detection tracking method based on YOLOv5 and Deepsort
CN111369605B (en) Infrared and visible light image registration method and system based on edge features
WO2013088175A1 (en) Image processing method
CN110288612B (en) Nameplate positioning and correcting method and device
CN113392856B (en) Image forgery detection device and method
CN111695373B (en) Zebra stripes positioning method, system, medium and equipment
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN111797832B (en) Automatic generation method and system for image region of interest and image processing method
CN114049499A (en) Target object detection method, apparatus and storage medium for continuous contour
CN108961262B (en) Bar code positioning method in complex scene
EP3553700A2 (en) Remote determination of containers in geographical region
CN115908988B (en) Defect detection model generation method, device, equipment and storage medium
CN114882204A (en) Automatic ship name recognition method
CN114119695A (en) Image annotation method and device and electronic equipment
CN114155285A (en) Image registration method based on gray level histogram
CN110276260B (en) Commodity detection method based on depth camera
CN115908774B (en) Quality detection method and device for deformed materials based on machine vision
Kaimkhani et al. UAV with Vision to Recognise Vehicle Number Plates
CN111027560B (en) Text detection method and related device
CN114663681A (en) Method for reading pointer type meter and related product
CN111626299A (en) Outline-based digital character recognition method
Liu et al. Identification of Damaged Building Regions from High-Resolution Images Using Superpixel-Based Gradient and Autocorrelation Analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 610042 No. 270, floor 2, No. 8, Jinxiu street, Wuhou District, Chengdu, Sichuan

Applicant after: Chengdu shuzhilian Technology Co.,Ltd.

Address before: No.2, floor 4, building 1, Jule road crossing, Section 1, West 1st ring road, Wuhou District, Chengdu City, Sichuan Province 610041

Applicant before: CHENGDU SHUZHILIAN TECHNOLOGY Co.,Ltd.

GR01 Patent grant