CN113807349B - Multi-view target identification method and system based on Internet of things

Info

Publication number: CN113807349B
Application number: CN202111039648.6A
Authority: CN (China)
Prior art keywords: identification, recognition, image, target, main
Legal status: Active (the status listed is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN113807349A
Inventor: 刘德兵 (Liu Debing)
Original and current assignee: Hainan University
Priority and filing date: 2021-09-06
Publication of CN113807349A: 2021-12-17
Publication of CN113807349B (grant): 2023-06-20

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/251: Fusion techniques of input or preprocessed data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Abstract

The invention discloses a multi-view target recognition method and system based on the Internet of things, relating to the technical field of the Internet of things and addressing the recognition differences across viewing angles and the limited reliability of existing target recognition techniques. The technical scheme is as follows: establish an identification contour curve and an identification direction for the target object; calibrate a corresponding acquisition area on the identification contour curve; calculate a main recognition range within the acquisition area and divide the acquisition area into a main recognition area and a secondary recognition area accordingly; divide the target image information into a main identification image and a secondary identification image; fuse the main and secondary identification images that share intersection areas; and recombine the results into new target image information on which image recognition is performed. The method accurately matches and fuses multiple target images, achieves high recognition accuracy with a low image-fusion computation load, effectively improves image recognition efficiency, and provides a basis for fast and accurate recognition of target objects.

Description

Multi-view target identification method and system based on Internet of things
Technical Field
The invention relates to the technical field of the Internet of things, in particular to a multi-view target identification method and system based on the Internet of things.
Background
With the continuous development of Internet of things applications, many scenarios require a target object to be recognized quickly from captured images, for example in face recognition, which places high demands on recognition speed and accuracy.
At present, target object recognition mainly works by uploading the video or pictures captured by a shooting terminal to a cloud server, which processes the images and returns the recognition result to the terminal once the target object has been identified. The cloud server generally processes the video or pictures of a single shooting terminal, then projects the two-dimensional image into three dimensions through three-dimensional reconstruction to obtain three-dimensional information about the target object and complete recognition. However, accuracy varies across the terminal's viewing-angle range: for example, the image region corresponding to the middle of the viewing-angle range is more accurate than the regions at the two sides, which produces recognition differences across the target image. Some target recognition techniques fuse target images acquired from multiple viewpoints to weaken this viewing-angle difference, but the higher-accuracy image regions may drift during image fusion, so the reliability of these techniques still needs improvement.
Therefore, how to design a multi-view target recognition method and system based on the Internet of things remains a problem to be solved.
Disclosure of Invention
The invention aims to solve the viewing-angle recognition differences and limited reliability of existing target recognition techniques, and provides a multi-view target recognition method and system based on the Internet of things that accurately matches and fuses multiple target images, achieves high recognition accuracy with a low image-fusion computation load, effectively improves image recognition efficiency, and provides a basis for fast and accurate recognition of target objects.
The technical aim of the invention is realized by the following technical scheme:
in a first aspect, a multi-view target recognition method based on the internet of things is provided, which includes the following steps:
s101: acquiring target image information acquired by a plurality of viewpoints, and establishing an identification contour curve and an identification direction of a target object according to the target image information;
s102: determining the acquisition direction of the corresponding viewpoint to the target object according to the position information of the viewpoint and the target object, and calibrating a corresponding acquisition area on the identification contour curve according to the view angle range and the acquisition direction of the viewpoint;
s103: determining an acquisition deviation angle under a corresponding viewpoint according to the acquisition direction and the recognition direction, calculating a main recognition range in an acquisition region according to a deviation angle function and the acquisition deviation angle, and dividing the acquisition region into a main recognition region and a secondary recognition region according to the main recognition range;
s104: correspondingly dividing the target image information acquired at the corresponding viewpoints into a main identification image and a secondary identification image according to the main identification area and the secondary identification area;
s105: fusing the secondary identification image obtained at the current viewpoint with the primary identification images and secondary identification images obtained at other viewpoints that share intersection areas with it, so as to obtain a fusion image;
s106: and recombining the fusion image, the main identification image and the secondary identification image which do not participate in fusion processing to form new target image information, and carrying out image identification according to the new target image information.
Further, the plurality of viewpoints are distributed at intervals in the same view angle plane, and the identification contour curve is located in the view angle plane.
Further, if the number of viewpoints is odd, the identification contour curve is constructed from the target image information acquired by the viewpoint located at the midpoint of the plurality of viewpoints; if the number is even, the identification contour curve is constructed jointly from the target image information acquired by the two viewpoints located on either side of the midpoint.
Further, the identification direction is established as follows:
calibrating the midpoint of the identification contour curve and drawing a tangent line at that midpoint;
constructing, from the midpoint as starting point, an identification vector perpendicular to the tangent and pointing away from the viewpoint, and taking this identification vector as the identification direction.
Further, the acquisition direction is the direction in which the bisector of the corresponding viewpoint's view angle range points toward the identification contour curve.
Further, the acquisition deviation angle is the deflection angle between the acquisition direction and the identification direction.
Further, the main recognition range is calculated as follows:
inputting the acquisition deviation angle into the deviation angle function to obtain the division coefficient of the main recognition range;
calculating the included-angle offset between the boundary line of the main recognition range and the acquisition direction from the division coefficient and the view angle range;
offsetting the acquisition direction symmetrically by the included-angle offset to obtain the main recognition range.
Further, the deviation angle function is specifically:
(The deviation angle function is given as a formula image, BDA0003248777960000031, which is not reproduced in this text.)
where θ₀ is the included-angle offset between the boundary line of the main recognition range and the acquisition direction, θ₁ is the view angle range, θ is the acquisition deviation angle, K is the division coefficient (with θ + K ≤ 90°), and α is the value range of the main recognition range.
Further, the fusion processing that produces the fusion image is specifically:
performing overlap analysis on the secondary identification images to be fused to obtain the intersection area;
cropping the intersection area from each secondary identification image to be fused and fusing the crops separately to obtain the fusion area;
splicing the fusion area with the cropped secondary identification images to obtain the fusion image.
In a second aspect, a multi-view target recognition system based on internet of things is provided, including:
the image processing module acquires target image information acquired by a plurality of viewpoints and establishes an identification contour curve and an identification direction of a target object according to the target image information;
the region calibration module is used for determining the acquisition direction of the corresponding viewpoint to the target object according to the position information of the viewpoint and the target object, and calibrating a corresponding acquisition region on the identification contour curve according to the view angle range and the acquisition direction of the viewpoint;
the area dividing module is used for determining an acquisition deviation angle under the corresponding view point according to the acquisition direction and the recognition direction, calculating a main recognition range in the acquisition area according to a deviation angle function and the acquisition deviation angle, and dividing the acquisition area into a main recognition area and a secondary recognition area according to the main recognition range;
the image segmentation module is used for correspondingly segmenting the target image information acquired at the corresponding viewpoints into a main identification image and a secondary identification image according to the main identification area and the secondary identification area;
the image fusion module is used for fusing the secondary identification image obtained at the current viewpoint with the primary identification images and secondary identification images obtained at other viewpoints that share intersection areas with it, so as to obtain a fusion image;
and the reconstruction identification module is used for recombining the fusion image, the main identification image and the secondary identification image which do not participate in fusion processing into new target image information and carrying out image identification according to the new target image information.
Compared with the prior art, the invention has the following beneficial effects:
1. According to the invention, the target image is divided into a higher-precision main identification image and a secondary identification image according to the relative positions of the target object and the monitoring viewpoints, and the secondary identification image is complementarily fused with images from other viewpoints, which avoids the loss of recognition accuracy caused by fusing high-precision main identification images from different viewpoints with one another;
2. The deviation angle function realizes intelligent division into main and secondary identification images, giving a wide application range and higher division accuracy;
3. By dividing the images into main and secondary identification images and restricting fusion processing to the intersection regions, the invention effectively reduces the computation load, improves target recognition efficiency, and reduces wasted network resources.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of the operation of an embodiment of the present invention;
fig. 2 is a system architecture diagram in an embodiment of the invention.
Detailed Description
To make the technical problems to be solved, the technical solutions, and the beneficial effects clearer, the invention is described in further detail below with reference to Figs. 1-2 and the embodiments.
Example 1
A multi-view target identification method based on the Internet of things is shown in fig. 1.
S101: acquiring target image information acquired by a plurality of viewpoints, and establishing an identification contour curve and an identification direction of a target object according to the target image information; the viewpoints are distributed at intervals in the same view angle plane, and the identification contour curve is located in the view angle plane.
If the number of viewpoints is odd, the identification contour curve is constructed from the target image information acquired by the viewpoint located at the midpoint of the plurality of viewpoints; if the number is even, the identification contour curve is constructed jointly from the target image information acquired by the two viewpoints located on either side of the midpoint. A minimal sketch of this selection rule is given below.
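For illustration only, here is a small Python sketch of the reference-viewpoint rule; it assumes the viewpoints are already ordered along the viewing plane, and the function name is ours, not the patent's:

```python
def reference_viewpoints(viewpoints):
    """Viewpoint(s) whose images build the identification contour curve:
    the single middle viewpoint for an odd count, or the two viewpoints
    straddling the midpoint for an even count. Assumes `viewpoints` is
    ordered along the view angle plane (an assumption of this sketch)."""
    n = len(viewpoints)
    if n % 2 == 1:
        return [viewpoints[n // 2]]
    return [viewpoints[n // 2 - 1], viewpoints[n // 2]]

print(reference_viewpoints(list("ABCDE")))  # odd count  -> ['C']
print(reference_viewpoints(list("ABCD")))   # even count -> ['B', 'C']
```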
The identification direction is established as follows: calibrate the midpoint of the identification contour curve and draw a tangent line at that midpoint; from the midpoint as starting point, construct an identification vector perpendicular to the tangent and pointing away from the viewpoint, and take this identification vector as the identification direction B.
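The construction of B reduces to elementary geometry. In the following hedged sketch the contour is an ordered array of 2-D samples and the tangent is approximated by a central difference; both the discretization and the function name are assumptions of this sketch, not the patent's:

```python
import numpy as np

def identification_direction(contour, viewpoint):
    """Return the contour midpoint and the unit identification vector B:
    perpendicular to the tangent at the midpoint and oriented away from
    the viewpoint. `contour` is an (N, 2) array of ordered 2-D samples;
    the central-difference tangent is an implementation choice."""
    mid = len(contour) // 2
    tangent = contour[mid + 1] - contour[mid - 1]
    normal = np.array([-tangent[1], tangent[0]], dtype=float)
    normal /= np.linalg.norm(normal)
    # Flip the normal if it points toward the viewpoint instead of away.
    if np.dot(normal, contour[mid] - np.asarray(viewpoint, float)) < 0:
        normal = -normal
    return contour[mid], normal

# Toy contour: an arc of the unit circle, with the camera below it.
t = np.linspace(0.2, np.pi - 0.2, 101)
arc = np.stack([np.cos(t), np.sin(t)], axis=1)
midpoint, B = identification_direction(arc, viewpoint=(0.0, -2.0))
print(midpoint, B)  # midpoint ~ (0, 1); B ~ (0, 1), pointing away from the camera
```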
S102: and determining the acquisition direction of the corresponding viewpoint to the target object according to the position information of the viewpoint and the target object, and calibrating the corresponding acquisition area A on the identification contour curve according to the view angle range and the acquisition direction of the viewpoint.
The acquisition direction C is the direction in which the bisector of the viewpoint's view angle range points toward the identification contour curve.
S103: and determining an acquisition deviation angle under the corresponding view point according to the acquisition direction and the identification direction, calculating a main identification range in the acquisition region according to a deviation angle function and the acquisition deviation angle, and dividing the acquisition region into a main identification region M and a secondary identification region N according to the main identification range.
The acquisition deviation angle θ is the deflection angle between the acquisition direction and the identification direction.
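Since C is the view-angle bisector aimed at the target and B is fixed by the contour, θ reduces to the angle between two unit vectors. A small sketch under that reading (vector names and the degree convention are ours):

```python
import numpy as np

def acquisition_deviation_angle(viewpoint, target, ident_dir):
    """Deviation angle (degrees) between the acquisition direction C
    (viewpoint -> target, i.e. the view-angle bisector aimed at the
    contour) and the identification direction B."""
    c = np.asarray(target, float) - np.asarray(viewpoint, float)
    c /= np.linalg.norm(c)
    b = np.asarray(ident_dir, float)
    b /= np.linalg.norm(b)
    return np.degrees(np.arccos(np.clip(np.dot(c, b), -1.0, 1.0)))

# A camera placed 30 degrees off the identification direction (0, 1):
print(acquisition_deviation_angle((-1.0, -np.sqrt(3.0)), (0.0, 0.0), (0.0, 1.0)))
# -> 30.0 (approximately)
```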
The main recognition range is calculated as follows: input the acquisition deviation angle into the deviation angle function to obtain the division coefficient of the main recognition range; calculate the included-angle offset between the boundary line of the main recognition range and the acquisition direction from the division coefficient and the view angle range; and offset the acquisition direction symmetrically by the included-angle offset to obtain the main recognition range.
The deviation angle function is specifically:
(The deviation angle function is given as a formula image, BDA0003248777960000051, which is not reproduced in this text.)
where θ₀ is the included-angle offset between the boundary line of the main recognition range and the acquisition direction, θ₁ is the view angle range, θ is the acquisition deviation angle, K is the division coefficient (with θ + K ≤ 90°), and α is the value range of the main recognition range.
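Because the deviation angle function itself survives only as a formula image, the sketch below substitutes a hypothetical division coefficient that shrinks linearly as θ grows; only the symmetric-offset construction around the acquisition direction is taken from the text:

```python
def main_recognition_range(theta_deg, view_angle_deg, acq_dir_deg):
    """Boundary angles of the main recognition range M, obtained by
    offsetting the acquisition direction symmetrically by theta_0.
    `division_coefficient` is a stand-in for the patent's deviation
    angle function, which is not reproduced in the text."""
    division_coefficient = max(0.0, 1.0 - theta_deg / 90.0)  # assumed form of K = f(theta)
    theta_0 = division_coefficient * view_angle_deg / 2.0    # boundary offset
    return acq_dir_deg - theta_0, acq_dir_deg + theta_0

# Head-on (theta = 0): M spans the whole 60-degree view angle range;
# at 45 degrees off-axis it narrows to half of that.
print(main_recognition_range(0.0, 60.0, 90.0))   # (60.0, 120.0)
print(main_recognition_range(45.0, 60.0, 90.0))  # (75.0, 105.0)
```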
S104: correspondingly dividing the target image information acquired at the corresponding viewpoints into a main identification image and a secondary identification image according to the main identification area and the secondary identification area.
S105: fusing the secondary identification image obtained at the current viewpoint with the primary identification images and secondary identification images obtained at other viewpoints that share intersection areas with it, so as to obtain a fusion image.
S106: and recombining the fusion image, the main identification image and the secondary identification image which do not participate in fusion processing to form new target image information, and carrying out image identification according to the new target image information.
The fusion processing that produces the fusion image is specifically as follows: perform overlap analysis on the secondary identification images to be fused to obtain the intersection area; crop the intersection area from each secondary identification image to be fused and fuse the crops separately to obtain the fusion area; and splice the fusion area with the cropped secondary identification images to obtain the fusion image.
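A minimal sketch of this crop-fuse-splice step, assuming the secondary identification images are rectified strips indexed in a shared column frame and using a plain mean as the fusion operator (the patent does not fix the operator):

```python
import numpy as np

def fuse_sub_images(img_a, span_a, img_b, span_b):
    """Fuse two secondary identification images whose column spans overlap
    in a shared frame: blend only the intersection (mean, as a stand-in
    operator) and splice the non-overlapping remainders back unchanged.
    `span_*` are (start, end) column indices; A is assumed leftmost."""
    lo = max(span_a[0], span_b[0])
    hi = min(span_a[1], span_b[1])
    if lo >= hi:
        raise ValueError("the sub-images share no intersection area")
    cut_a = img_a[:, lo - span_a[0]: hi - span_a[0]].astype(float)
    cut_b = img_b[:, lo - span_b[0]: hi - span_b[0]].astype(float)
    fusion_area = (cut_a + cut_b) / 2.0          # fused intersection only
    left = img_a[:, : lo - span_a[0]]            # remainder of A
    right = img_b[:, hi - span_b[0]:]            # remainder of B
    return np.hstack([left, fusion_area, right])

a = np.full((4, 6), 100.0)  # secondary image over columns 0..5
b = np.full((4, 6), 200.0)  # secondary image over columns 4..9
print(fuse_sub_images(a, (0, 6), b, (4, 10)).shape)  # (4, 10); columns 4-5 blended
```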
Example 2
The multi-view target recognition system based on the Internet of things comprises an image processing module, a region calibration module, a region dividing module, an image segmentation module, an image fusion module and a reconstruction recognition module as shown in fig. 2.
The image processing module acquires target image information acquired by a plurality of viewpoints and establishes an identification contour curve and an identification direction of the target object according to the target image information. The region calibration module determines the acquisition direction of the corresponding viewpoint toward the target object according to the position information of the viewpoint and the target object, and calibrates a corresponding acquisition region on the identification contour curve according to the view angle range and the acquisition direction of the viewpoint. The region division module determines the acquisition deviation angle at the corresponding viewpoint according to the acquisition direction and the identification direction, calculates the main recognition range within the acquisition region according to the deviation angle function and the acquisition deviation angle, and divides the acquisition region into a main recognition region and a secondary recognition region according to the main recognition range. The image segmentation module correspondingly segments the target image information acquired at each viewpoint into a main identification image and a secondary identification image according to the main and secondary recognition regions. The image fusion module fuses the secondary identification image obtained at the current viewpoint with the primary and secondary identification images obtained at other viewpoints that share intersection areas with it, obtaining a fusion image. The reconstruction recognition module recombines the fusion image with the main and secondary identification images that did not participate in fusion into new target image information and performs image recognition on it.
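The module wiring of Fig. 2 can be summarized as a structural sketch; the class and method names are ours, the bodies are deliberate placeholders, and only the data flow between modules is taken from the text:

```python
class MultiViewRecognitionSystem:
    """Structural sketch of the six modules of Fig. 2 (placeholders only)."""

    def image_processing(self, images, viewpoints):
        """S101: build the identification contour curve and direction."""
        raise NotImplementedError

    def region_calibration(self, viewpoint, target_position, contour):
        """S102: acquisition direction and acquisition area on the contour."""
        raise NotImplementedError

    def region_division(self, acq_dir, ident_dir, view_angle):
        """S103: deviation angle -> main recognition range -> areas M and N."""
        raise NotImplementedError

    def image_segmentation(self, image, main_area, sub_area):
        """S104: split one target image into a (main, secondary) image pair."""
        raise NotImplementedError

    def image_fusion(self, sub_images):
        """S105: fuse secondary images that share intersection areas."""
        raise NotImplementedError

    def reconstruction_recognition(self, fused, segments):
        """S106: recombine into new target image information and recognize."""
        raise NotImplementedError

    def run(self, images, viewpoints, view_angles, target_position):
        # Data flow between the modules, as described above.
        contour, ident_dir = self.image_processing(images, viewpoints)
        segments = []
        for image, vp, fov in zip(images, viewpoints, view_angles):
            acq_dir, area = self.region_calibration(vp, target_position, contour)
            main_area, sub_area = self.region_division(acq_dir, ident_dir, fov)
            segments.append(self.image_segmentation(image, main_area, sub_area))
        fused = self.image_fusion([sub for _, sub in segments])
        return self.reconstruction_recognition(fused, segments)
```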
Working principle: the target image is divided into a higher-precision main identification image and a secondary identification image according to the relative positions of the target object and the monitoring viewpoints, and the secondary identification image is complementarily fused with images from other viewpoints, avoiding the loss of recognition accuracy that fusing high-precision main identification images from different viewpoints would cause. The deviation angle function realizes intelligent division into main and secondary identification images, giving a wide application range and higher division accuracy. Restricting fusion to the main/secondary division and to the intersection regions effectively reduces the computation load, improves target recognition efficiency, and reduces wasted network resources.
The present embodiments serve only to explain the invention and are not to be construed as limiting it. After reading this specification, those skilled in the art may make modifications to the embodiments that involve no creative contribution as required; such modifications are protected by patent law within the scope of the claims of the invention.

Claims (9)

1. A multi-view target identification method based on the Internet of things, characterized by comprising the following steps:
s101: acquiring target image information acquired by a plurality of viewpoints, and establishing an identification contour curve and an identification direction of a target object according to the target image information;
s102: determining the acquisition direction of the corresponding viewpoint to the target object according to the position information of the viewpoint and the target object, and calibrating a corresponding acquisition area on the identification contour curve according to the view angle range and the acquisition direction of the viewpoint;
s103: determining an acquisition deviation angle under a corresponding viewpoint according to the acquisition direction and the recognition direction, calculating a main recognition range in an acquisition region according to a deviation angle function and the acquisition deviation angle, and dividing the acquisition region into a main recognition region and a secondary recognition region according to the main recognition range;
s104: correspondingly dividing the target image information acquired at the corresponding viewpoints into a main identification image and a secondary identification image according to the main identification area and the secondary identification area;
s105: fusing the secondary identification image obtained at the current viewpoint with the primary identification images and secondary identification images obtained at other viewpoints that share intersection areas with it, so as to obtain a fusion image;
s106: recombining the fusion image, the main identification image and the secondary identification image which do not participate in fusion processing to form new target image information, and carrying out image identification according to the new target image information;
the deviation angle function is specifically:
(The deviation angle function is given as a formula image, FDA0004172325980000011, which is not reproduced in this text.)
where θ₀ is the included-angle offset between the boundary line of the main recognition range and the acquisition direction, θ₁ is the view angle range, θ is the acquisition deviation angle, K is the division coefficient (with θ + K ≤ 90°), and α is the value range of the main recognition range.
2. The internet of things-based multi-view target recognition method according to claim 1, wherein the plurality of viewpoints are distributed at intervals in the same view angle plane, and the identification contour curve is located in the view angle plane.
3. The internet of things-based multi-view target recognition method according to claim 1, wherein if the number of viewpoints is odd, the identification contour curve is constructed from the target image information acquired by the viewpoint located at the midpoint of the plurality of viewpoints; if the number is even, the identification contour curve is constructed jointly from the target image information acquired by the two viewpoints located on either side of the midpoint.
4. The internet of things-based multi-view target identification method according to claim 1, wherein the identification direction is established as follows:
calibrating the midpoint of the identification contour curve and drawing a tangent line at that midpoint;
constructing, from the midpoint as starting point, an identification vector perpendicular to the tangent and pointing away from the viewpoint, and taking this identification vector as the identification direction.
5. The internet of things-based multi-view target recognition method according to claim 1, wherein the acquisition direction is the direction in which the bisector of the corresponding viewpoint's view angle range points toward the identification contour curve.
6. The internet of things-based multi-view target recognition method according to claim 1, wherein the acquisition deviation angle is the deflection angle between the acquisition direction and the identification direction.
7. The internet of things-based multi-view target recognition method according to claim 1, wherein the main recognition range is calculated as follows:
inputting the acquisition deviation angle into the deviation angle function to obtain the division coefficient of the main recognition range;
calculating the included-angle offset between the boundary line of the main recognition range and the acquisition direction from the division coefficient and the view angle range;
offsetting the acquisition direction symmetrically by the included-angle offset to obtain the main recognition range.
8. The internet of things-based multi-view target recognition method according to claim 1, wherein the fusion processing that produces the fusion image is specifically:
performing overlap analysis on the secondary identification images to be fused to obtain the intersection area;
cropping the intersection area from each secondary identification image to be fused and fusing the crops separately to obtain the fusion area;
splicing the fusion area with the cropped secondary identification images to obtain the fusion image.
9. A multi-view target recognition system based on the Internet of things, characterized by comprising:
the image processing module acquires target image information acquired by a plurality of viewpoints and establishes an identification contour curve and an identification direction of a target object according to the target image information;
the region calibration module is used for determining the acquisition direction of the corresponding viewpoint to the target object according to the position information of the viewpoint and the target object, and calibrating a corresponding acquisition region on the identification contour curve according to the view angle range and the acquisition direction of the viewpoint;
the area dividing module is used for determining an acquisition deviation angle under the corresponding view point according to the acquisition direction and the recognition direction, calculating a main recognition range in the acquisition area according to a deviation angle function and the acquisition deviation angle, and dividing the acquisition area into a main recognition area and a secondary recognition area according to the main recognition range;
the image segmentation module is used for correspondingly segmenting the target image information acquired at the corresponding viewpoints into a main identification image and a secondary identification image according to the main identification area and the secondary identification area;
the image fusion module is used for fusing the secondary identification image obtained at the current viewpoint with the primary identification images and secondary identification images obtained at other viewpoints that share intersection areas with it, so as to obtain a fusion image;
the reconstruction identification module is used for recombining the fusion image, the main identification image and the secondary identification image which do not participate in fusion processing into new target image information, and carrying out image identification according to the new target image information;
the deviation angle function is specifically:
(The deviation angle function is given as a formula image, FDA0004172325980000031, which is not reproduced in this text.)
where θ₀ is the included-angle offset between the boundary line of the main recognition range and the acquisition direction, θ₁ is the view angle range, θ is the acquisition deviation angle, K is the division coefficient (with θ + K ≤ 90°), and α is the value range of the main recognition range.
CN202111039648.6A (priority date 2021-09-06, filing date 2021-09-06): Multi-view target identification method and system based on Internet of things; status: Active; granted as CN113807349B (en).

Priority Applications (1)

Application Number: CN202111039648.6A; Priority Date: 2021-09-06; Filing Date: 2021-09-06; Title: Multi-view target identification method and system based on Internet of things

Applications Claiming Priority (1)

Application Number: CN202111039648.6A; Priority Date: 2021-09-06; Filing Date: 2021-09-06; Title: Multi-view target identification method and system based on Internet of things

Publications (2)

Publication Number: CN113807349A (en); Publication Date: 2021-12-17
Publication Number: CN113807349B (en); Publication Date: 2023-06-20

Family

Family ID: 78940466

Family Applications (1)

Application Number: CN202111039648.6A; Title: Multi-view target identification method and system based on Internet of things; Priority Date: 2021-09-06; Filing Date: 2021-09-06; Status: Active (granted as CN113807349B)

Country Status (1)

Country: CN; Link: CN113807349B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108833785A (en) * 2018-07-03 2018-11-16 清华-伯克利深圳学院筹备办公室 Fusion method, device, computer equipment and the storage medium of multi-view image
CN110738309A (en) * 2019-09-27 2020-01-31 华中科技大学 DDNN training method and DDNN-based multi-view target identification method and system
CN112541930A (en) * 2019-09-23 2021-03-23 大连民族大学 Image super-pixel target pedestrian segmentation method based on cascade connection
CN112949689A (en) * 2021-02-01 2021-06-11 Oppo广东移动通信有限公司 Image recognition method and device, electronic equipment and storage medium
CN113051980A (en) * 2019-12-27 2021-06-29 华为技术有限公司 Video processing method, device, system and computer readable storage medium
CN113313182A (en) * 2021-06-07 2021-08-27 北博(厦门)智能科技有限公司 Target identification method and terminal based on radar and video fusion

Also Published As

Publication Number: CN113807349A (en); Publication Date: 2021-12-17

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant